Science.gov

Sample records for camera based positron

  1. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food... DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image the distribution of positron-emitting radionuclides in the...

  2. Positron emission particle tracking using the new Birmingham positron camera

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Forster, R. N.; Fowles, P.; Takhar, P. S.

    2002-01-01

    Since 1985 a positron camera consisting of a pair of multi-wire proportional chambers has been used at Birmingham for engineering studies involving positron emitting radioactive tracers. The technique of positron emission particle tracking (PEPT), developed at Birmingham, whereby a single tracer particle can be tracked at high speed, has proved particularly powerful. The main limitation of the original positron camera was its low sensitivity and correspondingly low data rate. A new positron camera has recently been installed; it consists of a pair of NaI(Tl) gamma camera heads with fully digital readout and offers an enormous improvement in data rate and data quality. The performance of this camera, and in particular the improved capabilities it brings to the PEPT technique, are summarised.

  3. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  4. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  5. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  6. Positron emission particle tracking using a modular positron camera

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.

    2009-06-01

    The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.

  7. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where the PET technique makes a significant clinical contribution to diagnosis and therapy planning. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, obtained in vivo, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. The clinical impact of PET in liver and ENT malignant tumour staging requires further investigation.

  8. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL.... This generic type of device may include signal analysis and display equipment, patient and...

  9. A modular positron camera for the study of industrial processes

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2011-10-01

    Positron imaging techniques rely on the detection of the back-to-back annihilation photons arising from positron decay within the system under study. A standard technique, called positron emission particle tracking (PEPT) [1], uses a number of these detected events to rapidly determine the position of a positron emitting tracer particle introduced into the system under study. Typical applications of PEPT are in the study of granular and multi-phase materials in the disciplines of engineering and the physical sciences. Using components from redundant medical PET scanners a modular positron camera has been developed. This camera consists of a number of small independent detector modules, which can be arranged in custom geometries tailored towards the application in question. The flexibility of the modular camera geometry allows for high photon detection efficiency within specific regions of interest, the ability to study large and bulky systems and the application of PEPT to difficult or remote processes as the camera is inherently transportable.

  10. Characterization of the latest Birmingham modular positron camera

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.; Gargiuli, J.

    2011-10-01

    Positron imaging techniques rely on the detection of the back-to-back annihilation photons arising from positron decay within the field of view of a positron camera. A standard technique, called positron emission particle tracking (PEPT), uses a number of these detected events to rapidly determine the position of a positron emitting tracer particle introduced into the system under study. Conventionally, PEPT is performed using a positron camera with fixed geometry. Recently, however, a more flexible detection system (the modular positron camera) has been developed which allows customization of the detection geometry (i.e. allowed field-of-view) tailored for specific applications. Typically, PEPT is used to study particle dynamics, granular systems and multiphase flows. Presented in this paper are studies into the performance of the modular camera system, performed using a mixture of both Monte Carlo techniques and experimental validation. Studies of the stored event rate (and therefore particle location rate and location precision) have been performed and show a maximum data rate of 2.5 MHz, leading to particle location rates of 10 kHz with location precision of 0.5 mm in three dimensions.
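The PEPT location step described in these abstracts, finding the point closest in a least-squares sense to many LORs and then iteratively discarding the most distant (scattered or random) events before re-solving, can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the Birmingham implementation; `keep_fraction` and the iteration count are hypothetical parameters.

```python
import numpy as np

def locate_tracer(lors, keep_fraction=0.5, n_iter=5):
    """Estimate a tracer position from a set of LORs.

    lors: array of shape (N, 2, 3); lors[i, 0] is a point on line i and
    lors[i, 1] its direction. Each pass solves for the least-squares
    point, then keeps only the closest fraction of lines (rejecting
    scattered and random coincidences) before re-solving.
    """
    p = lors[:, 0]
    d = lors[:, 1] / np.linalg.norm(lors[:, 1], axis=1, keepdims=True)
    active = np.ones(len(p), dtype=bool)
    I = np.eye(3)
    x = np.zeros(3)
    for _ in range(n_iter):
        # least-squares point: sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i
        A = sum(I - np.outer(di, di) for di in d[active])
        b = sum((I - np.outer(di, di)) @ pi for pi, di in zip(p[active], d[active]))
        x = np.linalg.solve(A, b)
        # perpendicular distance from x to each active line
        r = p[active] - x
        dist = np.linalg.norm(r - (r * d[active]).sum(1, keepdims=True) * d[active], axis=1)
        # keep only the closest fraction of lines for the next pass
        idx = np.where(active)[0]
        active[:] = False
        active[idx[dist <= np.quantile(dist, keep_fraction)]] = True
    return x
```

Discarding a sizeable fraction of events per pass mirrors the fact that many detected coincidences are corrupted by scatter or randoms; the surviving subset determines the final fit.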

  11. Toward the design of a positron volume imaging camera

    SciTech Connect

    Rogers, J.G.; Stazyk, M.; Harrop, R.; Dykstra, C.J.; Barney, J.S.; Atkins, M.S.; Kinahan, P.E.

    1990-04-01

    Three different computing algorithms for performing positron emission image reconstruction have been compared using Monte Carlo phantom simulations. The work was motivated by the recent announcement of the commercial availability of a positron volume imaging camera which has improved axial (slice) resolution and retractable interslice septa. The simulations demonstrate the importance of developing a complete three-dimensional reconstruction algorithm to deal with the increased gamma detection solid angle and the increased scatter fraction that result when the interslice septa are removed from a ring tomograph.

  12. Developments in particle tracking using the Birmingham Positron Camera

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Allen, D. A.; Benton, D. M.; Fowles, P.; McNeil, P. A.; Tan, Min; Beynon, T. D.

    1997-02-01

    The RAL/Birmingham Positron Camera consists of a pair of MWPCs for detecting the pairs of back-to-back 511 keV photons arising from positron-electron annihilation. It was constructed in 1984 for the purpose of applying PET to engineering situations, and has been widely used for the non-invasive imaging of flow, including extensive studies on geological samples. The technique of Positron Emission Particle Tracking (PEPT), whereby a single positron-emitting tracer particle can be tracked at high speed, was developed at Birmingham and has proved a very powerful tool for studying the behaviour of granular materials in systems such as mixers and fluidised beds. In order to extend its effective field of view, the camera has recently been mounted on a motorised translation stage under computer control so that the motion of a tracer particle can be followed over a length of up to 1.5 m. A preliminary investigation into the feasibility of enhancing the PEPT technique using the singles count rates in the two detectors has also been undertaken.

  13. Development of the LBNL positron emission mammography camera

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Wang, Jimmy; Maltz, Jonathon S.; Qi, Jinyi; Mandelli, Emanuele; Moses, William W.

    2002-12-19

    We present the construction status of the LBNL Positron Emission Mammography (PEM) camera, which utilizes a PET detector module with depth of interaction measurement consisting of 64 LSO crystals (3 × 3 × 30 mm³) coupled on one end to a single photomultiplier tube (PMT) and on the opposite end to a 64 pixel array of silicon photodiodes (PDs). The PMT provides an accurate timing pulse, the PDs identify the crystal of interaction, the sum provides a total energy signal, and the PD/(PD+PMT) ratio determines the depth of interaction. We have completed construction of all 42 PEM detector modules. All data acquisition electronics have been completed, fully tested and loaded onto the gantry. We have demonstrated that all functions of the custom IC work using the production rigid-flex boards and data acquisition system. Preliminary detector module characterization and coincidence data have been taken using the production system, including initial images.

  14. Further improvements in the design of a positron camera with dense drift space MWPCs

    NASA Astrophysics Data System (ADS)

    Perez-Mendez, V.; Schwartz, G.; Nelson, W. R.; Bellazzini, R.; Del Guerra, A.; Massai, M. M.; Spandre, G.

    1983-11-01

    We describe the improvements achieved in the last three years towards the construction of a large solid angle positron camera with dense drift space MWPCs. A multiplane three-dimensional tomograph is proposed, made of six MWPC modules (active area 45 × 45 cm² each), arranged to form the lateral surface of a hexagonal prism. Its expected performance is presented and is shown to be very competitive with the multiring scintillator positron camera.

  15. Design considerations for a high-spatial-resolution positron camera with dense-drift-space MWPC's

    NASA Astrophysics Data System (ADS)

    Delguerra, A.; Perez-Mendez, V.; Schwartz, G.; Nelson, W. R.

    1982-10-01

    A multiplane Positron Camera is proposed, made of six MWPC modules arranged to form the lateral surface of a hexagonal prism. Each module (50 × 50 cm²) has a 2 cm thick lead-glass tube converter on both sides of a MWPC pressurized to 2 atm. Experimental measurements are presented to show how to reduce the parallax error by determining in which of the two converter layers the photon has interacted. The results of a detailed Monte Carlo calculation for the efficiency of this type of converter are shown to be in excellent agreement with the experimental measurements. The expected performance of the Positron Camera is presented: a true coincidence rate of 56,000 counts/s (with an equal accidental coincidence rate and a 30% Compton scatter contamination) and a spatial resolution better than 5.0 mm (FWHM) for a 400 μCi point-like source embedded in a 10 cm radius water phantom.
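This record quotes a true coincidence rate of 56,000 counts/s with an equal accidental rate. The standard estimate relating the accidental (random) coincidence rate to the detector singles rates and the coincidence resolving time can be sketched as follows; the numeric rates and resolving time in the usage example are hypothetical illustration values, not taken from the paper.

```python
def accidental_rate(n1, n2, tau):
    """Random (accidental) coincidence rate for two detectors with
    singles rates n1 and n2 (counts/s) and coincidence resolving
    time tau (s): R_acc = 2 * tau * n1 * n2."""
    return 2.0 * tau * n1 * n2
```

For example, singles rates of 1e5 counts/s on each detector with a 50 ns resolving time give 1000 accidentals/s; equating true and accidental rates, as in the quoted performance figure, fixes the singles load the camera must sustain at a given resolving time.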

  16. Development of a high resolution beta camera for a direct measurement of positron distribution on brain surface

    SciTech Connect

    Yamamoto, S.; Seki, C.; Kashikura, K.

    1996-12-31

    We have developed and tested a high resolution beta camera for a direct measurement of positron distribution on the brain surface of animals. The beta camera consists of a thin CaF2(Eu) scintillator, a tapered fiber optics plate (taper fiber) and a position sensitive photomultiplier tube (PSPMT). The taper fiber is the key component of the camera. We have developed two types of beta cameras. One is a 20 mm diameter field of view camera for imaging the brain surface of cats. The other is a 10 mm diameter camera for that of rats. Spatial resolutions of the beta cameras for cats and rats were 0.8 mm FWHM and 0.5 mm FWHM, respectively. We confirmed that the developed beta cameras may overcome the limitation of the spatial resolution of positron emission tomography (PET).

  17. Design considerations for a high-spatial-resolution positron camera with dense-drift-space MWPC's

    SciTech Connect

    Del Guerra, A.; Perez-Mendez, V.; Schwartz, G.; Nelson, W.R.

    1982-10-01

    A multiplane Positron Camera is proposed, made of six MWPC modules arranged to form the lateral surface of a hexagonal prism. Each module (50 × 50 cm²) has a 2 cm thick lead-glass tube converter on both sides of a MWPC pressurized to 2 atm. Experimental measurements are presented to show how to reduce the parallax error by determining in which of the two converter layers the photon has interacted. The results of a detailed Monte Carlo calculation for the efficiency of this type of converter are shown to be in excellent agreement with the experimental measurements. The expected performance of the Positron Camera is presented: a true coincidence rate of 56,000 counts/s (with an equal accidental coincidence rate and a 30% Compton scatter contamination) and a spatial resolution better than 5.0 mm (FWHM) for a 400 μCi point-like source embedded in a 10 cm radius water phantom.

  18. A CF4 based positron trap

    NASA Astrophysics Data System (ADS)

    Marjanovic, Srdjan; Bankovic, Ana; Dujko, Sasa; Deller, Adam; Cooper, Ben; Cassidy, David; Petrovic, Zoran

    2016-05-01

    All positron buffer gas traps in use rely on N2 as the primary trapping gas due to its conveniently placed a¹Π electronic excitation cross section, which is large enough to compete with positronium (Ps) formation in the threshold region. Its energy loss of 8.5 eV is sufficient to capture positrons into a potential well upon a single collision. The competing Ps formation, however, limits the efficiency of the two-stage trap to 25%. As positron moderators produce beams with energies of several eV, we have proposed to use CF4 in the first stage of the trap, due to its large vibrational excitation cross section; several vibrational excitations would be sufficient to trap the positrons with small losses. Apart from the simulations, we also report the results of attempts to apply this approach to an existing Surko-type positron trap. Operating the unmodified trap as a CF4 based device proved to be unsuccessful, due primarily to excessive scattering caused by the high CF4 pressure in the first stage. However, the performance was consistent with subsequent simulations using the real system parameters. This agreement indicates that an efficient CF4 based scheme may be realized in an appropriately designed trap.

  19. Design of POSICAM: A high resolution multislice whole body positron camera

    SciTech Connect

    Mullani, N.A.; Wong, W.H.; Hartz, R.K.; Bristow, D.; Gaeta, J.M.; Yerian, K.; Adler, S.; Gould, K.L.

    1985-01-01

    A high resolution (6 mm), multislice (21) whole body positron camera has been designed with an innovative detector and septa arrangement for 3-D imaging and tracer quantitation. An object of interest such as the brain or the heart is optimally imaged by the 21 simultaneous image planes, which have 12 mm resolution and are separated by 5.5 mm to provide adequate sampling in the axial direction. The detector geometry and the electronics are flexible enough to allow BaF2, BGO, GSO or time-of-flight BaF2 scintillators. The mechanical gantry has been designed for clinical applications and incorporates several features for patient handling and comfort. A large patient opening of 58 cm diameter with a tilt of ±30° and rotation of ±20° permits imaging from different positions without moving the patient. Multiprocessor computing systems and user-friendly software make the POSICAM a powerful 3-D imaging device.

  20. Positron extraction and transport in a nuclear-reactor-based positron beam

    NASA Astrophysics Data System (ADS)

    van Veen, A.; Schut, H.; Labohm, F.; de Roode, J.

    1999-05-01

    This paper describes the design of a positron beam which is primarily based on positron generation by pair formation near the core of a nuclear reactor. Several configurations of the positron source have been tested. All rely on large emitting surface areas, ranging from 1000 to 2000 cm². For efficient extraction of the emitted positrons, the positron emitting foils are arranged into 3-4 disks which are composed of cylinders with a 6-9 mm diameter and lengths from 10 to 40 mm. The electrical potential between adjacent disks could be varied up to 50 V so that a field gradient was present to carry the positrons through the cylinders. All source tests unexpectedly indicated a high reflectivity of moderated positrons at the tungsten surfaces of the moderation disks. A substantial number of positrons is emitted even without an electrical field. A model including the effects of positron reflection and electrical field extraction explains the experimental results reasonably well. At 2 MW reactor power, positrons were observed with an intensity of 0.7×10⁸ e⁺ s⁻¹ in a 10 mm beam spot.

  1. A CF4 based positron trap

    NASA Astrophysics Data System (ADS)

    Marjanović, Srdjan; Banković, Ana; Cassidy, David; Cooper, Ben; Deller, Adam; Dujko, Saša; Petrović, Zoran Lj

    2016-11-01

    All buffer-gas positron traps in use today rely on N2 as the primary trapping gas due to its conveniently placed a¹Π electronic excitation cross-section. The energy loss per excitation in this process is 8.5 eV, which is sufficient to capture positrons from low-energy moderated beams into a Penning-trap configuration of electric and magnetic fields. However, the energy range over which this cross-section is accessible overlaps with that for positronium (Ps) formation, resulting in inevitable losses and setting an intrinsic upper limit on the overall trapping efficiency of ∼25%. In this paper we present a numerical simulation of a device that uses CF4 as the primary trapping gas, exploiting vibrational excitation as the main inelastic capture process. The threshold for such excitations is far below that for Ps formation and hence, in principle, a CF4 trap can be highly efficient; our simulations indicate that it may be possible to achieve trapping efficiencies as high as 90%. We also report the results of an attempt to re-purpose an existing two-stage N2-based buffer-gas positron trap. Operating the device using CF4 proved unsuccessful, which we attribute to back scattering and expansion of the positron beam following interactions with the CF4 gas, and an unfavourably broad longitudinal beam energy spread arising from the magnetic field differential between the source and trap regions. The observed performance was broadly consistent with subsequent simulations that included parameters specific to the test system, and we outline the modifications that would be required to realise efficient positron trapping with CF4. However, additional losses appear to be present which require further investigation through both simulation and experiment.
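The trapping argument in these two records, one 8.5 eV electronic excitation for N2 versus several vibrational quanta for CF4, reduces to simple arithmetic on the injection energy. A minimal sketch; the CF4 quantum of roughly 0.157 eV (the ν3 vibrational mode) is our assumption for illustration, not a value stated in the abstracts, and elastic losses and re-heating are ignored.

```python
import math

def collisions_to_trap(beam_energy_ev, loss_per_collision_ev):
    """Minimum number of inelastic collisions needed to shed the full
    injection energy, one fixed energy quantum per collision."""
    return math.ceil(beam_energy_ev / loss_per_collision_ev)
```

For a hypothetical 3 eV moderated beam, a single 8.5 eV electronic excitation suffices for N2, whereas ~0.157 eV vibrational quanta require on the order of twenty collisions; this is why a CF4 scheme needs a large vibrational cross section, but in exchange it can operate entirely below the Ps formation threshold.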

  2. Positron and positronium annihilation in silica-based thin films studied by a pulsed positron beam

    NASA Astrophysics Data System (ADS)

    Suzuki, R.; Ohdaira, T.; Kobayashi, Y.; Ito, K.; Shioya, Y.; Ishimaru, T.

    2003-10-01

    Positron and positronium annihilation in silica-based thin films has been investigated by means of measurement techniques with a monoenergetic pulsed positron beam. The age-momentum correlation study revealed that positron annihilation in thermally grown SiO2 is basically the same as that in bulk amorphous SiO2, while o-Ps in the PECVD-grown SiCOH film predominantly annihilates with electrons of C and H at the microvoid surfaces. We also discuss time-dependent three-gamma annihilation in porous low-k films by two-dimensional positron annihilation lifetime spectroscopy.

  3. Scatter correction in scintillation camera imaging of positron-emitting radionuclides

    SciTech Connect

    Ljungberg, M.; Danfelter, M.; Strand, S.E.

    1996-12-31

    The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging 2-[18F]deoxyglucose. Due to the special crystal design (thin and wide), a significant number of primary events will also be recorded in the Compton region of the energy spectrum. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which needs to be corrected. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data with a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
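The window arithmetic described above amounts to subtracting a scaled scatter-window estimate from each imaging window and combining the results. A minimal per-pixel sketch; the multiplier values below are placeholders, since the paper determines k by calibration (and notes that k for the CW is more depth-dependent than for the PW).

```python
import numpy as np

def scatter_correct(pw, cw, sw, k_pw=0.5, k_cw=1.1):
    """Subtract a scaled scatter-window (SW) image from the photopeak
    window (PW) and Compton window (CW) images, clamp at zero, and
    combine the two scatter-corrected windows into one primary estimate.
    k_pw and k_cw are calibration constants; the defaults are
    illustrative placeholders."""
    pw_corr = np.clip(pw - k_pw * sw, 0.0, None)   # scatter-corrected photopeak
    cw_corr = np.clip(cw - k_cw * sw, 0.0, None)   # scatter-corrected Compton window
    return pw_corr + cw_corr
```

Combining the corrected CW with the corrected PW is what recovers the primary events mispositioned into the Compton region by the thin, wide crystal.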

  4. Optimization of positrons generation based on laser wakefield electron acceleration

    NASA Astrophysics Data System (ADS)

    Wu, Yuchi; Han, Dan; Zhang, Tiankui; Dong, Kegong; Zhu, Bin; Yan, Yonghong; Gu, Yuqiu

    2016-08-01

    Laser-based positron sources represent a new type of particle source with short pulse duration and high charge density. Positron production based on laser wakefield electron acceleration (LWFA) has been investigated theoretically in this paper. Analytical expressions for positron spectra and yield have been obtained through a combination of LWFA and cascade shower theories. The maximum positron yield and corresponding converter thickness have been optimized as a function of driving laser power. Under the optimal conditions, a high-energy (>100 MeV) positron yield of up to 5×10¹¹ can be produced by high-power femtosecond lasers at ELI-NP. The percentage of positrons shows that a quasineutral electron-positron jet can be generated by setting the converter thickness greater than 5 radiation lengths.

  5. Industrial positron-based imaging: Principles and applications

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Hawkesworth, M. R.; Broadbent, C. J.; Fowles, P.; Fryer, T. D.; McNeil, P. A.

    1994-09-01

    Positron Emission Tomography (PET) has great potential as a non-invasive flow imaging technique in engineering, since 511 keV gamma-rays can penetrate a considerable thickness of (e.g.) steel. The RAL/Birmingham multiwire positron camera was constructed in 1984, with the initial goal of observing the lubricant distribution in operating aero-engines, automotive engines and gearboxes, and has since been used in a variety of industrial fields. The major limitation of the camera for conventional tomographic PET studies is its restricted logging rate, which limits the frequency with which images can be acquired. Tracking a single small positron-emitting tracer particle provides a more powerful means of observing high speed motion using such a camera. Following a brief review of the use of conventional PET in engineering, and the capabilities of the Birmingham camera, this paper describes recent developments in the Positron Emission Particle Tracking (PEPT) technique, and compares the results obtainable by PET and PEPT using, as an example, a study of axial diffusion of particles in a rolling cylinder.

  6. Fast 3D-EM reconstruction using Planograms for stationary planar positron emission mammography camera.

    PubMed

    Motta, A; Guerra, A Del; Belcari, N; Moehrs, S; Panetta, D; Righi, S; Valentini, D

    2005-12-01

    At the University of Pisa we are building a PEM prototype, the YAP-PEM camera, consisting of two opposite 6 × 6 × 3 cm³ detector heads of 30 × 30 YAP:Ce finger crystals, 2 × 2 × 30 mm³ each. The camera will be equipped with breast compressors. The acquisition will be stationary. Compared with a whole body PET scanner, a planar Positron Emission Mammography (PEM) camera allows a better, easier and more flexible positioning around the breast in the vicinity of the tumor: this increases the sensitivity and solid angle coverage, and reduces cost. To avoid software rejection of data during the reconstruction, resulting in a reduced sensitivity, we adopted a 3D-EM reconstruction which uses all of the collected Lines Of Response (LORs). This avoids the PSF distortion introduced by data rebinning procedures and/or Fourier methods. The traditional 3D-EM reconstruction requires repeated computation of the LOR-voxel correlation matrix, or probability matrix {p(ij)}, and is therefore highly time-consuming. We use the sparse and symmetry properties of the matrix {p(ij)} to perform fast 3D-EM reconstruction. Geometrically, a 3D grid of cubic voxels (FOV) is crossed by several divergent 3D line sets (LORs). The symmetries occur when tracing different LORs produces the same p(ij) value. Parallel LORs of different sets cross the FOV in the same way, and the repetition of p(ij) values depends on the ratio between the tube and voxel sizes. By optimizing this ratio, the occurrence of symmetries is increased. We identify a nucleus of symmetry of LORs: for each set of symmetrical LORs we choose just one LOR to be put in the nucleus, while the others lie outside. All of the possible p(ij) values are obtainable by tracking only the LORs of this nucleus. The coordinates of the voxels of all of the other LORs are given by means of simple translation rules.
Before making the reconstruction, we trace the LORs of the nucleus to find the intersecting voxels, whose p(ij) values are computed and
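The reconstruction this record describes is the standard ML-EM update driven by the probability matrix {p(ij)}. A minimal dense-matrix sketch of that update; a real implementation would exploit the sparsity and LOR symmetries the authors describe rather than store the full matrix.

```python
import numpy as np

def mlem(p, counts, n_iter=50):
    """ML-EM image reconstruction.

    p: (n_lors, n_voxels) probability matrix, p[i, j] = probability that
       a decay in voxel j is detected along LOR i.
    counts: measured coincidences per LOR.
    """
    x = np.ones(p.shape[1])          # uniform initial image
    sens = p.sum(axis=0)             # voxel sensitivities, sum_i p(ij)
    for _ in range(n_iter):
        proj = p @ x                 # forward projection along every LOR
        ratio = np.divide(counts, proj,
                          out=np.zeros(len(counts)), where=proj > 0)
        # multiplicative EM update: backproject the measured/expected ratio
        x = x / np.maximum(sens, 1e-12) * (p.T @ ratio)
    return x
```

Because every collected LOR enters through its own row of p, no events are rejected by rebinning, which is exactly the sensitivity argument the abstract makes for full 3D-EM.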

  7. Camera-based driver assistance systems

    NASA Astrophysics Data System (ADS)

    Grimm, Michael

    2013-04-01

    In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview on the technology behind driver assistance systems, presents the most significant functionalities and focuses on the processes of developing camera-based systems for series production. We highlight the critical points which need to be addressed when camera-based driver assistance systems are sold in their thousands, worldwide - and the benefit in terms of safety which results from it.

  8. Nanolithography based on an atom pinhole camera.

    PubMed

    Melentiev, P N; Zablotskiy, A V; Lapshin, D A; Sheshin, E P; Baturin, A S; Balykin, V I

    2009-06-10

    In modern experimental physics the pinhole camera is used when the creation of a focusing element (lens) is difficult. We have experimentally realized a method of image construction in atom optics, based on the idea of an optical pinhole camera. With the use of an atom pinhole camera we have built an array of identical arbitrary-shaped atomic nanostructures with the minimum size of an individual nanostructure element down to 30 nm on an Si surface. The possibility of 30 nm lithography by means of atoms, molecules and clusters has been shown.

  9. Van de Graaff based positron source production

    NASA Astrophysics Data System (ADS)

    Lund, Kasey Roy

    The anti-matter counterpart of the electron, the positron, can be used for a myriad of scientific research projects, including materials research, energy storage, and deep space flight propulsion. Currently there is a demand for large numbers of positrons to aid in these research projects. There are different methods of producing and harvesting positrons, but all require radioactive sources or large facilities. Positron beams produced by relatively small accelerators are attractive because they are easily shut down, and small accelerators are readily available. A 4 MV Van de Graaff accelerator was used to induce the nuclear reaction 12C(d,n)13N in order to produce an intense beam of positrons. 13N is an isotope of nitrogen that decays with a 10 minute half-life into 13C, a positron, and an electron neutrino. This radioactive gas is frozen onto a cryogenic freezer, from which it is channeled to form an antimatter beam. The beam is then guided using axial magnetic fields into a superconducting magnet with a field strength of up to 7 Tesla, where it will be stored in a newly designed Micro-Penning-Malmberg trap. Experiments with several source geometries found that a maximum positron flux of greater than 0.55×10⁶ e⁺ s⁻¹ was achieved. This beam was produced using a solid rare-gas moderator composed of krypton. Due to geometric restrictions on this setup, only 0.1-1.0% of the antimatter was frozen to the desired locations. Simulations and preliminary experiments suggest that a new geometry, currently under testing, will produce a beam of 10⁷ e⁺ s⁻¹ or more.

  10. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology, where it is most useful to localize primary tumours and their metastases.

  11. Status of the Linac based positron source at Saclay

    NASA Astrophysics Data System (ADS)

    Rey, J.-M.; Coulloux, G.; Debu, P.; Dzitko, H.; Hardy, P.; Liszkay, L.; Lotrus, P.; Muranaka, T.; Noel, C.; Pérez, P.; Pierret, O.; Ruiz, N.; Sacquin, Y.

    2013-06-01

    Low energy positron beams are of major interest for fundamental science and materials science. IRFU has developed and built a slow positron source based on a compact, low energy (4.3 MeV) electron linac. The linac-based source will provide positrons for a magnetic storage trap and represents the first step of the GBAR experiment (Gravitational Behaviour of Antimatter at Rest), recently approved by CERN for installation in the Antiproton Decelerator hall. The installation built in Saclay is described with its main characteristics. The ultimate target of the GBAR experiment is briefly presented, as well as the foreseen development of an industrial positron source dedicated to materials science laboratories.

  12. A slanting light-guide analog decoding high resolution detector for positron emission tomography camera

    SciTech Connect

    Wong, W.H.; Jing, M.; Bendriem, B.; Hartz, R.; Mullani, N.; Gould, K.L.; Michel, C.

    1987-02-01

    Current high resolution PET cameras require the scintillation crystals to be much narrower than the smallest available photomultipliers. In addition, the large number of photomultiplier channels constitutes the major component cost in the camera. Recent new designs use the Anger camera type of analog decoding method to obtain higher resolution and lower cost by using the relatively large photomultipliers. An alternative approach to improve the resolution and cost factors has been proposed, with a system of slanting light-guides between the scintillators and the photomultipliers. In the Anger camera schemes, the scintillation light is distributed to several neighboring photomultipliers which then determine the scintillation location. In the slanting light-guide design, the scintillation is metered and channeled to only two photomultipliers for the decision making. This paper presents the feasibility and performance achievable with the slanting light-guide detectors. With a crystal/photomultiplier ratio of 6/1, the intrinsic resolution was found to be 4.0 mm using the first non-optimized prototype light-guides on BGO crystals. The axial resolution will be about 5-6 mm.

  13. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization, by reducing overexposed glared areas, brightening dark areas, and accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details `in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describes essential information on the patient and procedure. The operator can assign custom data descriptors and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be shown over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  14. A high speed PC-based data acquisition and control system for positron imaging

    NASA Astrophysics Data System (ADS)

    Leadbeater, T. W.; Parker, D. J.

    2009-06-01

    A modular positron camera with a flexible geometry, suitable for performing Positron Emission Particle Tracking (PEPT) studies on a wide range of applications, has been constructed. The demand for high speed list mode data storage in these experiments motivated the development of an improved data acquisition system to support the existing detectors. A high speed PC-based data acquisition system is presented. This device replaces the old dedicated hardware with a compact, flexible device offering the same functionality and superior performance. Data acquisition rates of up to 80 MBytes per second allow coincidence data to be saved to disk for real-time analysis or post processing. The system stores timing information with half-millisecond resolution and supports remote trigger data. Control of the detector system is provided by high-level software running on the same computer.
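The quoted 80 MByte/s disk rate bounds the sustainable coincidence event rate once an event size is fixed. A minimal sizing sketch, assuming a hypothetical 8-byte list-mode event word (the abstract does not give the actual record format):

```python
# Hypothetical sizing: if each coincidence event occupies an 8-byte
# list-mode word (an assumption, not a figure from the abstract),
# the 80 MB/s disk rate caps the sustainable event rate.
EVENT_BYTES = 8                      # assumed event word size
DISK_RATE = 80 * 1024 * 1024         # quoted acquisition rate, bytes/s

max_events_per_s = DISK_RATE // EVENT_BYTES
print(max_events_per_s)  # 10485760 events/s under these assumptions
```

A larger event word (for example, one carrying extra timing or trigger fields) scales the achievable rate down proportionally.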

  15. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in daily life. To obtain reliable information from images, camera calibration cannot be neglected. Traditional camera calibration is often impractical in real scenes because accurate coordinates for the reference control points are unavailable. In this article, we present a camera calibration algorithm that determines the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which are commonly found in everyday photographs, so calibration can be performed from ordinary images. In more detail, we use two pairs of parallel lines to compute two vanishing points. If the two pairs of lines are mutually perpendicular, the two vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) suffice to determine the IAC. The intrinsic parameters are then obtained by Cholesky factorization of the IAC matrix. Furthermore, the line connecting a vanishing point with the camera optical center is parallel to the corresponding lines in the scene plane; from this, the extrinsic parameters R and T are recovered. Both the simulation and the experimental results meet our expectations.
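The vanishing-point construction described above can be sketched in a few lines: in homogeneous coordinates, the line through two image points is their cross product, and the intersection of two such lines (the vanishing point of a pair of imaged parallel scene lines) is again a cross product. This is an illustrative sketch with made-up point coordinates, not the authors' implementation:

```python
def cross(a, b):
    """Cross product of two homogeneous 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return cross((p[0], p[1], 1.0), (q[0], q[1], 1.0))

def vanishing_point(l1, l2):
    """Intersection of two homogeneous lines, dehomogenized to (x, y)."""
    v = cross(l1, l2)
    return (v[0]/v[2], v[1]/v[2])

# Two images of parallel scene lines, drawn to converge at (100, 50):
lA = line_through((0.0, 0.0), (100.0, 50.0))
lB = line_through((0.0, 100.0), (100.0, 50.0))
print(vanishing_point(lA, lB))  # (100.0, 50.0)
```

In the full method, vanishing points from several views feed linear constraints on the IAC matrix, whose Cholesky factor yields the intrinsic matrix.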

  16. Plasma and trap-based techniques for science with positrons

    NASA Astrophysics Data System (ADS)

    Danielson, J. R.; Dubin, D. H. E.; Greaves, R. G.; Surko, C. M.

    2015-01-01

    In recent years, there has been a wealth of new science involving low-energy antimatter (i.e., positrons and antiprotons) at energies ranging from 10^2 to less than 10^-3 eV. Much of this progress has been driven by the development of new plasma-based techniques to accumulate, manipulate, and deliver antiparticles for specific applications. This article focuses on the advances made in this area using positrons. However, many of the resulting techniques are relevant to antiprotons as well. An overview is presented of relevant theory of single-component plasmas in electromagnetic traps. Methods are described to produce intense sources of positrons and to efficiently slow the typically energetic particles thus produced. Techniques are described to trap positrons efficiently and to cool and compress the resulting positron gases and plasmas. Finally, the procedures developed to deliver tailored pulses and beams (e.g., in intense, short bursts, or as quasimonoenergetic continuous beams) for specific applications are reviewed. The status of development in specific application areas is also reviewed. One example is the formation of antihydrogen atoms for fundamental physics [e.g., tests of invariance under charge conjugation, parity inversion, and time reversal (the CPT theorem), and studies of the interaction of gravity with antimatter]. Other applications discussed include atomic and materials physics studies and the study of the electron-positron many-body system, including both classical electron-positron plasmas and the complementary quantum system in the form of Bose-condensed gases of positronium atoms. Areas of future promise are also discussed. The review concludes with a brief summary and a list of outstanding challenges.

  17. A Mini Linac Based Positron Source at CEA-Saclay

    SciTech Connect

    Debu, P.; Perez, P.; Rey, J.-M.; Sacquin, Y.; Blideanu, V.; Curtoni, A.; Delferriere, O.; Dupre, P.; Muranaka, T.; Ruiz, N.

    2009-09-02

    We are installing at CEA-Saclay a demonstration setup for an intense positron source. It is based on a compact 5.5 MeV electron linac used to produce positrons via pair production on a tungsten target; this energy is below the neutron activation threshold. A relatively high current of 0.15 mA compensates for the low positron yield at low energy. The expected production rate is 5·10{sup 11} fast positrons per second. A set of coils is arranged to select the fast positrons from the diffracted electron beam in order to study the possibility of using a rare gas cryogenic moderator away from the main flux of particles. The commissioning of the linac is under way. This setup is part of a project to demonstrate the feasibility of an experiment to produce the H{sup +} ions for a free fall measurement of neutral antihydrogen (H). Its small size and cost could be of interest for material science applications, after adaptation of the time structure.

  18. Formation mechanisms and optimization of trap-based positron beams

    NASA Astrophysics Data System (ADS)

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-02-01

    Described here are simulations of pulsed, magnetically guided positron beams formed by ejection from Penning-Malmberg-style traps. In a previous paper [M. R. Natisin et al., Phys. Plasmas 22, 033501 (2015)], simulations were developed and used to describe the operation of an existing trap-based beam system and provided good agreement with experimental measurements. These techniques are used here to study the processes underlying beam formation in more detail and under more general conditions, therefore further optimizing system design. The focus is on low-energy beams (˜eV) with the lowest possible spread in energies (<10 meV), while maintaining microsecond pulse durations. The simulations begin with positrons trapped within a potential well and subsequently ejected by raising the bottom of the trapping well, forcing the particles over an end-gate potential barrier. Under typical conditions, the beam formation process is intrinsically dynamical, with the positron dynamics near the well lip, just before ejection, particularly crucial to setting beam quality. In addition to an investigation of the effects of beam formation on beam quality under typical conditions, two other regimes are discussed; one occurring at low positron temperatures in which significantly lower energy and temporal spreads may be obtained, and a second in cases where the positrons are ejected on time scales significantly faster than the axial bounce time, which results in the ejection process being essentially non-dynamical.

  19. Rank-based camera spectral sensitivity estimation.

    PubMed

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data are not linear and, for the first time, allows estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method. PMID:27140768
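The rank-order constraint described above is linear: if response r_i exceeds r_j, any candidate sensor s must satisfy s·(c_i − c_j) > 0 for the corresponding stimuli c_i, c_j. A toy Monte Carlo sketch of intersecting the feasible regions (the 3-bin "spectra" and hidden sensor below are invented for illustration, not data from the paper):

```python
import random

random.seed(0)

# Invented toy setup: three 3-bin spectral stimuli and a hidden sensor.
stimuli = [(1.0, 0.2, 0.0), (0.3, 1.0, 0.1), (0.0, 0.4, 1.0)]
true_sensor = (0.2, 0.7, 0.1)

def response(sensor, c):
    """Linear camera response: inner product of sensitivity and stimulus."""
    return sum(s * x for s, x in zip(sensor, c))

responses = [response(true_sensor, c) for c in stimuli]
# Every observed ranking r_i > r_j becomes a linear constraint on the sensor.
pairs = [(i, j) for i in range(3) for j in range(3)
         if responses[i] > responses[j]]

def feasible(sensor):
    """True if the candidate sensor reproduces all observed rank orders."""
    return all(response(sensor, stimuli[i]) > response(sensor, stimuli[j])
               for i, j in pairs)

# Keep only random candidate sensors lying in the intersected feasible region.
candidates = [tuple(random.random() for _ in range(3)) for _ in range(2000)]
kept = [s for s in candidates if feasible(s)]
print(feasible(true_sensor))  # True
```

The true sensor is feasible by construction; as more ranked pairs are added, the surviving region shrinks around it. The paper's method works with this intersection geometrically rather than by sampling.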

  20. Rank-based camera spectral sensitivity estimation.

    PubMed

    Finlayson, Graham; Darrodi, Maryam Mohammadzadeh; Mackiewicz, Michal

    2016-04-01

    In order to accurately predict a digital camera response to spectral stimuli, the spectral sensitivity functions of its sensor need to be known. These functions can be determined by direct measurement in the lab (a difficult and lengthy procedure) or through simple statistical inference. Statistical inference methods are based on the observation that when a camera responds linearly to spectral stimuli, the device spectral sensitivities are linearly related to the camera rgb response values, and so can be found through regression. However, for rendered images, such as the JPEG images taken by a mobile phone, this assumption of linearity is violated. Even small departures from linearity can negatively impact the accuracy of the recovered spectral sensitivities when a regression method is used. In our work, we develop a novel camera spectral sensitivity estimation technique that can recover the linear device spectral sensitivities from linear images and the effective linear sensitivities from rendered images. According to our method, the rank order of a pair of responses imposes a constraint on the shape of the underlying spectral sensitivity curve (of the sensor). Technically, each rank-pair splits the space where the underlying sensor might lie into two parts (a feasible region and an infeasible region). By intersecting the feasible regions from all the ranked pairs, we can find a feasible region of sensor space. Experiments demonstrate that using rank orders delivers estimation accuracy equal to the prior art. However, the rank-based method delivers a step change in estimation performance when the data are not linear and, for the first time, allows estimation of the effective sensitivities of devices that may not even have a "raw mode." Experiments validate our method.

  1. Recent progress in tailoring trap-based positron beams

    SciTech Connect

    Natisin, M. R.; Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2013-03-19

    Recent progress is described to implement two approaches to specially tailor trap-based positron beams. Experiments and simulations are presented to understand the limits on the energy spread and pulse duration of positron beams extracted from a Penning-Malmberg (PM) trap after the particles have been buffer-gas cooled (or heated) in the range of temperatures 1000 {>=} T {>=} 300 K. These simulations are also used to predict beam performance for cryogenically cooled positrons. Experiments and simulations are also presented to understand the properties of beams formed when plasmas are tailored in a PM trap in a 5 tesla magnetic field, then non-adiabatically extracted from the field using a specially designed high-permeability grid to create a new class of electrostatically guided beams.

  2. First Test Of A New High Resolution Positron Camera With Four Area Detectors

    NASA Astrophysics Data System (ADS)

    van Laethem, E.; Kuijk, M.; Deconinck, Frank; van Miert, M.; Defrise, Michel; Townsend, D.; Wensveen, M.

    1989-10-01

    A PET camera consisting of two pairs of parallel area detectors has been installed at the cyclotron unit of VUB. The detectors are High Density Avalanche Chambers (HIDAC) wire-chambers with a stack of 4 or 6 lead gamma-electron converters, the sensitive area being 30 by 30 cm. The detectors are mounted on a commercial gantry allowing a 180 degree rotation during acquisition, as needed for a fully 3D image reconstruction. The camera has been interfaced to a token-ring computer network consisting of 5 workstations among which the various tasks (acquisition, reconstruction, display) can be distributed. Each coincident event is coded in 48 bits and is transmitted to the computer bus via a 512 kbytes dual ported buffer memory allowing data rates of up to 50 kHz. Fully 3D image reconstruction software has been developed, and includes new reconstruction algorithms allowing a better utilization of the available projection data. Preliminary measurements and imaging of phantoms and small animals (with 18FDG) have been performed with two of the four detectors mounted on the gantry. They indicate the expected 3D isotropic spatial resolution of 3.5 mm (FWHM, line source in air) and a sensitivity of 4 cps/μCi for a centred point source in air, corresponding to typical data rates of a few kHz. This latter figure is expected to improve by a factor of 4 after coupling of the second detector pair, since the coincidence sensitivity of this second detector pair is a factor 3 higher than that of the first one.
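The quoted event size (48 bits) and buffer size (512 kbytes) allow a back-of-envelope check of how long the dual-ported buffer can absorb data at the maximum 50 kHz coincidence rate. This is only a consistency sketch of the abstract's own figures:

```python
# Figures taken from the abstract: 48-bit events, 512 kbyte buffer, 50 kHz rate.
EVENT_BYTES = 48 // 8          # 6 bytes per coincidence event
BUFFER_BYTES = 512 * 1024      # dual-ported buffer memory
RATE_HZ = 50_000               # maximum quoted data rate

events_in_buffer = BUFFER_BYTES // EVENT_BYTES
seconds_buffered = events_in_buffer / RATE_HZ
print(events_in_buffer, round(seconds_buffered, 2))  # 87381 1.75
```

So the buffer holds under two seconds of peak-rate data, consistent with its role as a staging area in front of the computer bus rather than bulk storage.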

  3. Zoom camera based on liquid lenses

    NASA Astrophysics Data System (ADS)

    Kuiper, S.; Hendriks, B. H. W.; Suijver, J. F.; Deladi, S.; Helwegen, I.

    2007-01-01

    A 1.7× VGA zoom camera was designed based on two variable-focus liquid lenses and three plastic lenses. The strongly varying curvature of the liquid/liquid interface in the lens makes an achromatic design complicated. Special liquids with a rare combination of refractive index and Abbe number are required to prevent chromatic aberrations for all zoom levels and object positions. A set of acceptable liquids was obtained and used in a prototype that was constructed according to our design. First photos taken with the prototype show a proof of principle.

  4. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We make a gradient correction to a new local density approximation form of the positron-electron correlation. The positron lifetimes and affinities are then probed by using these two approximation forms based on three electronic-structure calculation methods: the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between the lifetimes calculated with the FLAPW and ATSUP methods are clearly interpreted in terms of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement with the FLAPW method on both positron lifetimes and affinities, and that the competitiveness of the ATSUP method against the FLAPW/PAW methods is reduced within the best calculations. By comparison with the experimental data, the newly introduced gradient-corrected correlation form proves competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  5. A new scheme to accumulate positrons in a Penning-Malmberg trap with a linac-based positron pulsed source

    SciTech Connect

    Dupre, P.

    2013-03-19

    The Gravitational Behaviour of Antimatter at Rest experiment (GBAR) is designed to perform a direct measurement of the weak equivalence principle on antimatter by measuring the acceleration of anti-hydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high-density positronium (Ps) cloud as a target for antiprotons, provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. The Ps target will be produced by a pulse of a few 10{sup 10} positrons injected onto a positron-positronium converter. For this purpose, a slow positron source using an electron linac has been constructed at Saclay. The present flux is comparable with that of {sup 22}Na-based sources using a solid neon moderator. A new positron accumulation scheme with a Penning-Malmberg trap has been proposed, taking advantage of the pulsed time structure of the beam. In the trap, the positrons are cooled by interaction with a dense electron plasma. The overall trapping efficiency has been estimated by numerical simulations to be {approx}70%.

  6. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-08-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other together with a short-lived one are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 200 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.
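The altitude determination described above rests on triangulating the same contrail point from cameras at known ground positions. A deliberately simplified 2-D sketch (the paper uses full calibrated camera models; the coordinates below are invented for illustration):

```python
import math

def triangulate(x1, e1, x2, e2):
    """Two ground cameras at horizontal positions x1, x2 observe elevation
    angles e1, e2 to the same point; solve h = (x - x1)*tan(e1) = (x - x2)*tan(e2)
    for the point's horizontal position x and altitude h."""
    t1, t2 = math.tan(e1), math.tan(e2)
    x = (x2 * t2 - x1 * t1) / (t2 - t1)
    return x, (x - x1) * t1

# Invented example: a contrail point at (5, 10), cameras at x = 0 and x = 20.
x, h = triangulate(0.0, math.atan2(10.0, 5.0), 20.0, math.atan2(10.0, -15.0))
print(round(x, 6), round(h, 6))  # 5.0 10.0
```

The real geometry adds azimuth and the camera calibration described in the abstract, but the baseline-versus-angle trade-off here explains why widely spaced cameras yield the quoted sub-200 m vertical accuracy.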

  7. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-12-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other, together with a short-lived one, are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 230 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  8. A Semiconductor-Based Positron Emission Tomography System

    NASA Astrophysics Data System (ADS)

    Oxley, D. C.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Grint, A. N.; Harkness, L. J.; Jones, M.; Judson, D. S.; Nolan, P. J.; Slee, M.; Unsworth, C.; Lazarus, I. H.

    2009-12-01

    This paper summarizes the research conducted with the high-purity germanium based small animal imaging system, SmartPET (SMall Animal Reconstructive Tomograph for Positron Emission Tomography). Geant4 simulations of the experimental setup were carried out in order to derive novel analysis procedures and quantify the system limitations. In this paper, we focus on a gamma-ray tracking approach devised to overcome germanium's high Compton scattering cross-section and on imaging challenging and complex phantom geometries. The potential of the developed tools and of the system itself is discussed.

  9. Design analysis and performance evaluation of a two-dimensional camera for accelerated positron-emitter beam injection by computer simulation

    SciTech Connect

    Llacer, J.; Chatterjee, A.; Batho, E.K.; Poskanzer, J.A.

    1982-05-01

    The characteristics and design of a high-accuracy and high-sensitivity 2-dimensional camera for the measurement of the end-point of the trajectory of accelerated heavy ion beams of positron emitter isotopes are described. Computer simulation methods have been used in order to ensure that the design would meet the demanding criteria: the ability to locate the centroid of a point source in the X-Y plane with errors smaller than 1 mm, with an activity of 100 nanoCi, in a counting time of 5 sec or less. A computer program which can be developed into a general-purpose analysis tool for a large number of positron emitter camera configurations is described in its essential parts. The validation of basic simulation results with simple measurements is reported, and the use of the program to generate simulated images which include important second-order effects due to detector material, geometry, septa, etc. is demonstrated. Comparison between simulated images and initial results with the completed instrument shows that the desired specifications have been met.
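The quantity the simulation targets, the centroid of a point-source image in the detector plane, is a counts-weighted mean of detector positions. A one-dimensional toy sketch with invented numbers (not data from the paper):

```python
# Counts-weighted centroid along one detector axis; numbers are invented.
positions = [-2.0, -1.0, 0.0, 1.0, 2.0]   # detector element centres, mm
counts    = [3, 20, 55, 19, 3]            # hypothetical recorded counts

centroid = sum(p * n for p, n in zip(positions, counts)) / sum(counts)
print(round(centroid, 4))  # -0.01
```

With Poisson counting statistics, the centroid's statistical error shrinks roughly as one over the square root of the total counts, which is why the design couples the 1 mm accuracy target to a minimum activity and counting time.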

  10. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations of the computation technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitations of using a traditional optical simulation approach to study light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distribution and typically lacks the capability to render pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to link the virtual scene with the real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The operational constraints, performance metrics, and computational resources associated with this newly developed light field camera technique are presented in detail.

  11. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    NASA Astrophysics Data System (ADS)

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral images data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments.

  12. Spectral Camera based on Ghost Imaging via Sparsity Constraints.

    PubMed

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-16

    The image-information acquisition ability of a conventional camera is usually far below the Shannon limit, since it does not exploit the correlation between pixels of the image data. By applying a random phase modulator to encode the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal-light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the cells of the three-dimensional (3D) spectral image data-cube can be resolved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon limit determined by information theory in optical imaging instruments.

  13. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image-information acquisition ability of a conventional camera is usually far below the Shannon limit, since it does not exploit the correlation between pixels of the image data. By applying a random phase modulator to encode the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal-light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the cells of the three-dimensional (3D) spectral image data-cube can be resolved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon limit determined by information theory in optical imaging instruments. PMID:27180619

  14. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system to expand the aperture plane of the microscope to the size of an imaging lens array, and utilize a sensor array to acquire the different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490
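
    The reported throughput is consistent with simple arithmetic under one reading of the setup; the per-view resolution below is an assumption for illustration, not a figure stated in the abstract:

```python
views = 5 * 5                 # 5 x 5 camera array
pixels_per_view = 0.25e6      # assumed per-view resolution (not stated in the abstract)
bytes_per_pixel = 3           # 8-bit RGB
fps = 30
throughput_MBps = views * pixels_per_view * bytes_per_pixel * fps / 1e6
# 25 * 0.25e6 * 3 * 30 / 1e6 = 562.5 MB/s, matching the reported figure
```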

  15. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parametric equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parametric equation largely improves the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root-mean-squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for on-site calibration of multiple cameras without a common field of view. PMID:26761007

  16. Cluster-based distributed face tracking in camera networks.

    PubMed

    Yoder, Josiah; Medeiros, Henry; Park, Johnny; Kak, Avinash C

    2010-10-01

    In this paper, we present a distributed multi-camera face tracking system suitable for large wired camera networks. Unlike previous multi-camera face tracking systems, ours does not require a central server to coordinate the tracking effort. Instead, an efficient camera clustering protocol is used to dynamically form groups of cameras for in-network tracking of individual faces. The clustering protocol includes cluster propagation mechanisms that allow the computational load of face tracking to be transferred to different cameras as the target objects move, and the dynamic election of cluster leaders provides robustness against system failures. Our experimental results show that our cluster-based distributed face tracker is capable of accurately tracking multiple faces in real time. The overall performance of the distributed system is comparable to that of a centralized face tracker, while offering the advantages of scalability and robustness. PMID:20423804

  17. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. To improve the effectiveness of these systems, we frequently want to increase the number of observed objects, but solving this problem is not as simple as adding more cameras: quite often, economic or physical restrictions prevent us from adding cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. To accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding-horizon controller in which we model the underlying optimal control problem as a mixed-integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting the kinematics of each. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but also use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to observe the entire region of interest in an effective and thorough manner.
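
    The Kalman filter coupling described above maintains both a state estimate and an uncertainty for each tracked object; the uncertainty is what lets the controller favor rarely observed targets. A minimal one-dimensional constant-velocity filter (an illustrative sketch, not the paper's formulation) looks like:

```python
def kalman_step(x, P, z, dt=1.0, q=1e-3, r=0.5):
    """One predict/update cycle of a 1D constant-velocity Kalman filter.

    x = [position, velocity], P = 2x2 covariance (list of lists),
    z = noisy position measurement. Returns the updated (x, P).
    """
    # Predict: x' = F x, P' = F P F^T + Q, with F = [[1, dt], [0, 1]]
    xp = [x[0] + dt * x[1], x[1]]
    Pp = [[P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + q,
           P[0][1] + dt * P[1][1]],
          [P[1][0] + dt * P[1][1],
           P[1][1] + q]]
    # Update with position measurement z (H = [1, 0]):
    S = Pp[0][0] + r                      # innovation covariance
    K = [Pp[0][0] / S, Pp[1][0] / S]      # Kalman gain
    y = z - xp[0]                         # innovation
    xn = [xp[0] + K[0] * y, xp[1] + K[1] * y]
    Pn = [[(1 - K[0]) * Pp[0][0], (1 - K[0]) * Pp[0][1]],
          [Pp[1][0] - K[1] * Pp[0][0], Pp[1][1] - K[1] * Pp[0][1]]]
    return xn, Pn
```

    Between updates, the predict step grows `P`, so a target that has not been viewed recently accumulates uncertainty, which is exactly the quantity the cooperative controller can penalize to schedule revisits.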

  18. An Undulator Based Polarized Positron Source for CLIC

    SciTech Connect

    Liu, Wanming; Gai, Wei; Rinolfi, Louis; Sheppard, John; /SLAC

    2012-07-02

    A viable positron source scheme is proposed that uses circularly polarized gamma rays generated from the main 250 GeV electron beam. The beam passes through a helical superconducting undulator with a magnetic field of approximately 1 Tesla and a period of 1.15 cm. The gamma rays produced in the undulator, in the energy range between about 3 MeV and 100 MeV, are directed onto a titanium target to produce polarized positrons. The positrons are then captured, accelerated, and transported to a Pre-Damping Ring (PDR). Detailed parameter studies of this scheme, including positron yield and undulator parameter dependence, are presented. Effects on the 250 GeV CLIC main beam, including emittance growth and energy loss from passing through the undulator, are also discussed.
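
    The quoted undulator parameters fix the first-harmonic photon energy through the standard undulator relations K = eBλu/(2π mₑc) and λ₁ = λu(1 + K²/2)/(2γ²). A quick check with textbook constants (an independent estimate, not a number from the paper) lands inside the quoted 3–100 MeV range:

```python
E_beam = 250e9          # main-beam energy, eV
m_e    = 0.511e6        # electron rest energy, eV
B      = 1.0            # undulator field, T (approximate, per the abstract)
lam_u  = 1.15e-2        # undulator period, m

gamma = E_beam / m_e
K = 93.4 * B * lam_u                               # undulator strength parameter
lam1 = lam_u * (1 + K**2 / 2) / (2 * gamma**2)     # first-harmonic wavelength, m
E1_MeV = 1.23984e-6 / lam1 / 1e6                   # E[eV] = hc/lambda = 1.23984e-6 / lambda[m]
```

    This gives K of about 1.07 and a first-harmonic energy of roughly 30 MeV; the 3–100 MeV spread in the abstract reflects the angular distribution and higher harmonics.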

  19. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
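
    The sphere-drawing selection step can be sketched as a Euclidean-distance test in camera response space; the function below is an illustrative reading of the method (the names and the radius parameter are assumptions), with the subsequent Wiener estimation step omitted:

```python
def select_training_samples(test_response, candidates, radius):
    """Return indices of training samples whose camera response (R, G, B)
    lies inside a sphere of the given radius centred on the test response."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [i for i, c in enumerate(candidates) if dist(c, test_response) <= radius]
```

    Because camera response correlates closely with spectral reflectance, samples selected this way are spectrally similar to the test sample, which is why the reconstruction model trained on them outperforms one trained on the full chart set.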

  20. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
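
    The two-step structure described above (detect edges in the neighborhood, then penalize only where no edge is found) can be illustrated with a toy one-dimensional sketch. It uses a crisp threshold in place of the paper's fuzzy-rule-based derivatives, so it is not the authors' algorithm, only the edge-gated smoothing idea:

```python
def edge_gated_smooth(img, edge_thresh=0.5, iters=10):
    """Toy 1D edge-preserving smoothing: a pixel is averaged with its
    neighbours only when no neighbouring difference exceeds edge_thresh,
    i.e. when no edge is detected in its nearest neighbourhood."""
    x = list(img)
    for _ in range(iters):
        y = list(x)
        for i in range(1, len(x) - 1):
            # step 1: edge detection in the nearest neighbourhood
            if abs(x[i] - x[i - 1]) < edge_thresh and abs(x[i + 1] - x[i]) < edge_thresh:
                # step 2: smoothing only where no edge was found
                y[i] = (x[i - 1] + x[i] + x[i + 1]) / 3.0
        x = y
    return x
```

    On a noisy step signal, the small ripples are smoothed away while the step itself is never averaged across, which is the qualitative behavior the fuzzy potential function is designed to achieve.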

  1. Design of microcontroller based system for automation of streak camera

    SciTech Connect

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-15

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using DC-to-DC converters. A high-voltage ramp signal is generated by a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates; the slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal, and an independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the camera settings and capture the image, which is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  2. Observation of Polarized Positrons from an Undulator-Based Source

    SciTech Connect

    Alexander, G; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K.T.; Mikhailichenko, A.A.; Moortgat-Pick, G.A.; Pahl, P.; /Tel Aviv U. /Cornell U., Phys. Dept. /SLAC /Tennessee U. /Humboldt U., Berlin /DESY /Yerevan Phys. Inst. /Aachen, Tech. Hochsch. /DESY, Zeuthen /Princeton U. /Durham U. /Daresbury

    2008-03-06

    An experiment (E166) at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons and photons in matter.

  3. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
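
    The orthogonal Procrustes step mentioned above finds the rigid transform relating the 3D head and feet positions reconstructed by two cameras. A standard SVD-based solution (a generic sketch, without the paper's RANSAC outlier handling) is:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ~ q,
    for corresponding 3D points given as rows of P and Q, via the
    SVD-based orthogonal Procrustes solution."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)              # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

    Applied pairwise to the per-camera 3D reconstructions, this yields the relative extrinsic parameters; a refinement stage can then minimize the reprojection error, as the abstract describes.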

  4. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-05-09

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life.

  5. [Radiometric calibration of LCTF-based multispectral area CCD camera].

    PubMed

    Du, Li-Li; Yi, Wei-Ning; Zhang, Dong-Ying; Huang, Hong-Lian; Qiao, Yan-Li; Zhang, Xie

    2011-01-01

    The multispectral area-CCD camera based on a liquid crystal tunable filter (LCTF) is a new spectral imaging system that records an image at a single wavelength on the area CCD by exploiting the electrically controlled birefringence of the liquid crystal and the interference of polarized light. Because of the particular working principles of the LCTF and the frame-transfer area CCD, existing radiometric calibration methods cannot meet the precision needs of remote sensing applications when applied to the LCTF camera. An improved radiometric calibration method is proposed, in which the camera performance test and calibration experiment are carried out using an integrating sphere and a standard detector, and the absolute calibration coefficient is calculated after correcting the frame-transfer smear and improving the data processing algorithm. The validity of the laboratory calibration coefficient is then checked in a field validation experiment. The experimental results indicate that the calibration coefficient is valid and that the radiation information on the ground can be accurately retrieved from the calibrated image data. With the improved calibration precision, the range of applications of the image data acquired by the camera can be extended effectively.
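
    Frame-transfer smear arises because each pixel keeps collecting light while the image is shifted through the imaging area during readout. A toy first-order correction (an illustrative model, not the paper's algorithm) subtracts from every pixel a column-sum term scaled by the ratio of the row-shift time to the integration time:

```python
def desmear(img, t_shift, t_int):
    """Remove frame-transfer smear under a toy model: during transfer,
    each pixel in a column briefly sees every row of that column, adding
    roughly (t_shift / t_int) * (column sum) to each pixel. Valid only
    as a first-order correction when t_shift << t_int."""
    rows, cols = len(img), len(img[0])
    out = [row[:] for row in img]
    for j in range(cols):
        col_sum = sum(img[i][j] for i in range(rows))
        smear = (t_shift / t_int) * col_sum
        for i in range(rows):
            out[i][j] -= smear
    return out
```

    A real correction also depends on the transfer direction and sensor timing; the point here is only that the smear is a column-wise additive term that calibration must remove before the absolute coefficient is computed.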

  6. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080

  7. Performance of the (n,γ)-Based Positron Beam Facility NEPOMUC

    NASA Astrophysics Data System (ADS)

    Schreckenbach, K.; Hugenschmidt, C.; Löwe, B.; Maier, J.; Pikart, P.; Piochacz, C.; Stadlbauer, M.

    2009-01-01

    The in-pile positron source NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron-capture gamma rays produced by cadmium in a beam tube tip close to the reactor core. Gamma-ray absorption in platinum produces positrons, which are moderated and formed into the beam. An unprecedented beam intensity of 9 × 10^8 e+/s is achieved (at 1 keV). The performance and applications of the facility are presented.

  8. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies covering the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, so the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors, such as the gain and the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  9. A Robust Camera-Based Interface for Mobile Entertainment.

    PubMed

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-02-19

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies covering the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, so the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors, such as the gain and the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.

  10. The NLC positron source

    SciTech Connect

    Tang, H.; Kulikov, A.V.; Clendenin, J.E.; Ecklund, S.D.; Miller, R.A.

    1995-05-01

    A baseline design for the NLC positron source, based on the existing SLC positron system, is described. The proposed NLC source consists of a dedicated S-band electron accelerator, a conventional positron production and capture system utilizing a high-Z target and an adiabatic matching device, and an L-band positron linac. The invariant transverse acceptance of the capture system is 0.06 m·rad, ensuring an adequate positron beam intensity for the NLC.

  11. Range camera self-calibration based on integrated bundle adjustment via joint setup with a 2D digital camera.

    PubMed

    Shahbazi, Mozhdeh; Homayouni, Saeid; Saadatseresht, Mohammad; Sattari, Mehran

    2011-01-01

    Time-of-flight cameras, based on photonic mixer device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. The addition of a digital camera to the calibration process overcomes the limitations of the range camera's small field of view and low pixel resolution. The tests were performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvement in the RMS of the range error and the coordinate residuals, respectively, over that obtained by integrated calibration of the single PMD camera. PMID:22164102

  12. Range camera self-calibration based on integrated bundle adjustment via joint setup with a 2D digital camera.

    PubMed

    Shahbazi, Mozhdeh; Homayouni, Saeid; Saadatseresht, Mohammad; Sattari, Mehran

    2011-01-01

    Time-of-flight cameras, based on photonic mixer device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. The addition of a digital camera to the calibration process overcomes the limitations of the range camera's small field of view and low pixel resolution. The tests were performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvement in the RMS of the range error and the coordinate residuals, respectively, over that obtained by integrated calibration of the single PMD camera.

  13. Range Camera Self-Calibration Based on Integrated Bundle Adjustment via Joint Setup with a 2D Digital Camera

    PubMed Central

    Shahbazi, Mozhdeh; Homayouni, Saeid; Saadatseresht, Mohammad; Sattari, Mehran

    2011-01-01

    Time-of-flight cameras, based on Photonic Mixer Device (PMD) technology, are capable of measuring distances to objects at high frame rates; however, the measured ranges and the intensity data contain systematic errors that need to be corrected. In this paper, a new integrated range camera self-calibration method via joint setup with a digital (RGB) camera is presented. This method can simultaneously estimate the systematic range error parameters as well as the interior and exterior orientation parameters of the camera. The calibration approach is based on photogrammetric bundle adjustment of observation equations originating from the collinearity condition and a range error model. The addition of a digital camera to the calibration process overcomes the limitations of the range camera's small field of view and low pixel resolution. The tests were performed on a dataset captured by a PMD[vision]-O3 camera from a multi-resolution test field of high-contrast targets. An average improvement of 83% in the RMS of the range error and 72% in the RMS of the coordinate residuals, over that achieved with basic calibration, was realized in an independent accuracy assessment. Our proposed calibration method also achieved 25% and 36% improvement in the RMS of the range error and the coordinate residuals, respectively, over that obtained by integrated calibration of the single PMD camera. PMID:22164102

  14. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done on the management of "structured" video such as movies, sports, and television programs, which have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, in particular video shot by a typical unprofessional user (i.e., home video). We describe how camera motion information can be used for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then relate the camera behavior in each segment to the subjective importance of its content and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames (keyframes) for the scenes are determined and aggregated to summarize the video sequence.

  15. Camera-based independent couch height verification in radiation oncology.

    PubMed

    Kusters, Martijn; Louwe, Rob; Biemans-van Kastel, Liesbeth; Nieuwenkamp, Henk; Zahradnik, Rien; Claessen, Roy; van Seters, Ronald; Huizenga, Henk

    2015-01-01

    For specific radiation therapy (RT) treatments, it is advantageous to use the isocenter-to-couch distance (ICD) for initial patient setup.(1) Since sagging of the treatment couch is not properly taken into account by the electronic readout of the treatment machine, this readout cannot be used for initial patient positioning with the ICD. Initial positioning of the couch at the prescribed ICD has therefore been carried out with a ruler prior to each treatment fraction in our institution. However, the ruler method is laborious and does not allow logging of data. The objective of this study is to replace the ruler-based setup of the couch height with an independent, user-friendly, optical camera-based method in which the radiation technologists only have to move the couch to the correct height, shown on a display. A camera-based independent couch height measurement system (ICHS) was developed in cooperation with Panasonic Electric Works Western Europe. Clinical data showed that the ICHS is at least as accurate as a ruler for verifying the ICD. The system has been successfully implemented in seven treatment rooms since 10 September 2012. Its benefits are a more streamlined workflow, a reduction of human errors during initial patient setup, and logging of the actual couch height at the isocenter. Daily QA shows that the systems are stable and operate within the set 1 mm tolerance; regular QA is necessary to guarantee that the system continues to work correctly. PMID:26699308

  16. Formation of buffer-gas-trap based positron beams

    SciTech Connect

    Natisin, M. R. Danielson, J. R. Surko, C. M.

    2015-03-15

    Presented here are experimental measurements, analytic expressions, and simulation results for pulsed, magnetically guided positron beams formed using a Penning-Malmberg style buffer gas trap. In the relevant limit, particle motion can be separated into motion along the magnetic field and gyro-motion in the plane perpendicular to the field. Analytic expressions are developed which describe the evolution of the beam energy distributions, both parallel and perpendicular to the magnetic field, as the beam propagates through regions of varying magnetic field. Simulations of the beam formation process are presented, with the parameters chosen to accurately replicate experimental conditions. The initial conditions and ejection parameters are varied systematically in both experiment and simulation, allowing the relevant processes involved in beam formation to be explored. These studies provide new insights into the underlying physics, including significant adiabatic cooling, due to the time-dependent beam-formation potential. Methods to improve the beam energy and temporal resolution are discussed.
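
    The adiabatic scaling behind the cooling described above can be made concrete. A minimal sketch (the function name, units, and numbers are illustrative, not taken from the record): since the magnetic moment μ = E⊥/B is an adiabatic invariant, transporting the beam into a weaker guiding field reduces the perpendicular energy proportionally, with the difference exchanged with the parallel motion.

```python
def transport_energies(e_par, e_perp, b_initial, b_final):
    """Adiabatic transport of a charged particle between magnetic field regions.

    The magnetic moment mu = E_perp / B is an adiabatic invariant, so the
    perpendicular energy scales with the field strength, while the total
    kinetic energy in a static field is conserved; the difference appears
    in the parallel (guiding-centre) motion.
    """
    e_perp_out = e_perp * (b_final / b_initial)
    e_par_out = e_par + (e_perp - e_perp_out)  # total kinetic energy conserved
    return e_par_out, e_perp_out
```

    For example, ejecting a beam from a 1 T trap region into a 0.1 T guiding field leaves only a tenth of the initial perpendicular energy.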

  17. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method that compares a stored image of the goal location with the current view to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both aim to determine the distance and direction to the goal location. Navigational algorithms using the Scale Invariant Feature Transform (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information from SIFT (Homing in Scale Space, HiSS). HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We combine the wide-field images with stereo data obtained from the stereo camera to extend the SIFT keypoint vector with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
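
    A minimal sketch of how depth-augmented keypoints can yield a homing vector, under a pure-translation simplification (names are illustrative; the paper's actual algorithm may differ): if a landmark has camera-frame coordinates p_cam = p_world - t, then the mean displacement of matched 3-D keypoints between the current and goal views equals the required camera translation.

```python
import numpy as np

def homing_vector(current_pts, goal_pts):
    """Translation that moves the camera from its current pose toward the goal.

    current_pts, goal_pts: (N, 3) matched keypoints (x, y, depth z) expressed
    in the camera frame at the current and goal locations. Ignoring rotation,
    p_current - p_goal = t_goal - t_current, so the mean displacement of the
    matched points is the required camera translation; its norm estimates the
    distance to the goal.
    """
    disp = np.asarray(current_pts, float) - np.asarray(goal_pts, float)
    t = disp.mean(axis=0)
    return t, float(np.linalg.norm(t))
```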

  18. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  19. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    NASA Astrophysics Data System (ADS)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    In November 2008 we installed in Saclay a facility for an intense positron source. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·10^11 per second. The study of the moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are part of a project to produce positively charged antihydrogen ions, aiming to demonstrate the feasibility of a free-fall antigravity measurement of neutral antihydrogen.

  20. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    SciTech Connect

    Cooper, B. S. Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B.

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10{sup 5} positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10{sup 7} cm{sup −3} is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  1. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between the consecutive frames. Unlike previous methods, our algorithm does not need any preprocess step for computing the dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporal coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.
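
    A minimal sketch of the nonparametric sample-consensus background test that this family of methods builds on (thresholds and names are illustrative; the camera-motion compensation and MRF optimization steps are not shown): each pixel keeps a bank of past samples, and the pixel is labelled background when enough stored samples are close to its current value.

```python
import numpy as np

def classify_background(samples, frame, radius=20, min_matches=2):
    """Nonparametric sample-consensus background test (ViBe-style sketch).

    samples: (K, H, W) bank of past grayscale values per pixel, e.g. filled
    by random replacement over time; frame: (H, W) current frame. A pixel is
    background when at least `min_matches` stored samples lie within `radius`
    of its current value.
    """
    diffs = np.abs(samples.astype(np.int32) - frame.astype(np.int32))
    matches = (diffs < radius).sum(axis=0)
    return matches >= min_matches  # boolean mask, True = background
```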

  2. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler), yielding information on the heliocentric spacecraft state, and images taken by an on-board camera, yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  3. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy.

    PubMed

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo; Parker, Alan; Christensen, Niels Jørgen; Dahl, Anders Bjorholm; Larsen, Rasmus

    2016-05-10

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed, but their low spatial resolution limits their validity ranges for the coefficients. To cover a wider range of coefficients, we use camera-based spectroscopic oblique incidence reflectometry. We develop a noninvasive technique for acquisition of apparent particle size distributions based on this approach. Our technique is validated using stable oil-in-water emulsions with a wide range of known particle size distributions. We also measure the apparent particle size distributions of complex dairy products. These results show that our tool, in contrast to those based on fiber probes, can deal with a range of optical properties wide enough to track apparent particle size distributions in a typical industrial process. PMID:27168301

  4. Camera-based forecasting of insolation for solar systems

    NASA Astrophysics Data System (ADS)

    Manger, Daniel; Pagel, Frank

    2015-02-01

    With the transition towards renewable energies, electricity suppliers are faced with huge challenges. In particular, the increasing integration of solar power systems into the grid becomes more and more complicated because of their dynamic feed-in capacity. To assist the stabilization of the grid, the feed-in capacity of a solar power system within the next hours, minutes and even seconds should be known in advance. In this work, we present a consumer-camera-based system for forecasting the feed-in capacity of a solar system over a horizon of 10 seconds. A camera is targeted at the sky, and clouds are segmented, detected and tracked. A quantitative prediction of the insolation is performed based on the tracked clouds. Image data as well as ground-truth data for the feed-in capacity were synchronously collected at 1 Hz using a small solar panel, a resistor and a measuring device. Preliminary results demonstrate both the applicability and the limits of the proposed system.

  5. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  6. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2004-12-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.

  7. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images, to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white

  8. Whole blood glucose analysis based on smartphone camera module.

    PubMed

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine functionalized silica (AFSiO2) nanoparticles as an electrostatic attraction in order to achieve glucose oxidation on the chip. The assay reagent immobilized AFSiO2 nanoparticles develop a semi-transparent reaction platform, which is technically a suitable chip to analyze by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and enzyme immobilized PET film chip, constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis. PMID:26524683

  9. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology using a consistent evaluation methodology that encounters OCR accuracy and a newly introduced measure using a semi-automatic procedure. PMID:20876019

  10. Whole blood glucose analysis based on smartphone camera module

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine functionalized silica (AFSiO2) nanoparticles as an electrostatic attraction in order to achieve glucose oxidation on the chip. The assay reagent immobilized AFSiO2 nanoparticles develop a semi-transparent reaction platform, which is technically a suitable chip to analyze by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and enzyme immobilized PET film chip, constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis.

  11. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet the required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first-article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples, including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  12. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capability of a high-speed camera to capture high-speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely CCD and CMOS. A CMOS sensor consumes less power than a CCD sensor and can capture images more rapidly. High-speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet direction measurement in the military. The high-speed camera system built in this study has the following components: a CMOS image sensor that can capture about 500 frames per second at a resolution of 1280×1024; an FPGA and DDR2 memory that control the image sensor and store images; a Camera Link module that transmits the stored data to a PC; and an RS-422 communication function that enables control of the camera from a PC.

  13. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A highly accurate profile-extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
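
    The camera model with radial and decentering (tangential) distortion referred to above can be sketched as follows. The coefficient names (k1, k2, p1, p2) follow the Brown-Conrady convention that OpenCV uses; the snippet is a hand-rolled illustration of the model, not OpenCV's own code.

```python
def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial + decentering distortion to normalized image
    coordinates (x, y) = (X/Z, Y/Z), as in the OpenCV camera model."""
    r2 = xn ** 2 + yn ** 2
    radial = 1 + k1 * r2 + k2 * r2 ** 2
    xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn ** 2)
    yd = yn * radial + p1 * (r2 + 2 * yn ** 2) + 2 * p2 * xn * yn
    return xd, yd

def project(xd, yd, fx, fy, cx, cy):
    """Map distorted normalized coordinates to pixel coordinates using the
    intrinsic parameters (focal lengths fx, fy and principal point cx, cy)."""
    return fx * xd + cx, fy * yd + cy
```

    Calibration then amounts to finding the intrinsics and distortion coefficients that minimize the reprojection error of the detected checkerboard corners.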

  14. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between human binocular vision and a dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of human eyes. The stereo imaging system designed with the method proposed in this paper can faithfully recover the 3-D shape of the photographed object.
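
    Once the prism system has been reduced to an equivalent dual-camera geometry, depth recovery follows the standard binocular relation Z = f·B/d. A minimal sketch (the parameter values in the usage note are illustrative, not from the record):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Pinhole stereo depth: Z = f * B / d, with focal length f in pixels,
    effective baseline B in metres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px
```

    For example, with a 700 px focal length, a 0.12 m effective baseline and a 42 px disparity, the point lies 2 m from the camera.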

  15. NIR spectrophotometric system based on a conventional CCD camera

    NASA Astrophysics Data System (ADS)

    Vilaseca, Meritxell; Pujol, Jaume; Arjona, Montserrat

    2003-05-01

    The near infrared (NIR) spectral region is useful in many applications, including agriculture, the food and chemical industries, and textile and medical applications. In this region, spectral reflectance measurements are currently made with conventional spectrophotometers. These instruments are expensive, since they use a diffraction grating to obtain monochromatic light. In this work, we present a multispectral-imaging-based technique for obtaining the reflectance spectra of samples in the NIR region (800-1000 nm), using a small number of measurements taken through different channels of a conventional CCD camera. We used methods based on Wiener estimation, non-linear methods and principal component analysis (PCA) to reconstruct the spectral reflectance. We also analyzed, by numerical simulation, the number and shape of the filters that need to be used in order to obtain good spectral reconstructions. We reconstructed a set of 30 spectral reflectance curves using a minimum of 2 and a maximum of 6 filters under the influence of two different halogen lamps with color temperatures Tc1 = 2852 K and Tc2 = 3371 K. The results show that using between three and five filters with a large spectral bandwidth (FWHM = 60 nm), the reconstructed spectral reflectance of the samples was very similar to the original spectrum. The small reconstruction errors show the potential of this method for reconstructing spectral reflectances in the NIR range.
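
    The training-based linear reconstruction mentioned above can be sketched as follows. This is the least-squares form of the Wiener-type estimator (the variable names and the synthetic setup are illustrative, not from the record): a reconstruction matrix is fitted from samples with known spectra and camera responses, then applied to new responses.

```python
import numpy as np

def spectral_reconstruct(train_reflectances, train_responses, responses):
    """Linear spectral reconstruction from camera channel responses.

    train_reflectances: (N, L) known reflectance spectra;
    train_responses:    (N, C) camera responses for the same samples;
    responses:          (M, C) new measurements to reconstruct.
    Fits W = pinv(C) @ R on the training set (least-squares form of the
    Wiener matrix) and returns the estimated spectra responses @ W.
    """
    C = np.asarray(train_responses, float)
    R = np.asarray(train_reflectances, float)
    W = np.linalg.pinv(C) @ R  # (C, L) reconstruction matrix
    return np.asarray(responses, float) @ W
```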

  16. Online self-camera orientation based on laser metrology and computer algorithms

    NASA Astrophysics Data System (ADS)

    Rodríguez, J. Apolinar Muñoz

    2011-12-01

    An online self-camera orientation method for mobile vision is presented. In this technique, the camera orientation is determined during the vision task. This procedure is carried out by Bezier networks of a laser line. Here, the camera orientation is calibrated when the camera is turned during the vision task. The networks also perform the three-dimensional vision. The network structure is built based on the behavior of the line shifting, which is produced by the surface depth. From this structure, the initial calibration and the online self-camera orientation are deduced. The proposed technique avoids the calibrated references and physical measurements that are used in the traditional calibration of camera orientation. Thus, calibration limitations caused by modifications of the camera orientation are overcome, allowing three-dimensional vision to be performed. The proposed self-camera orientation therefore improves the accuracy and performance of mobile vision, because online data from calibrated references are not passed to the vision system. This procedure represents a contribution to the field of camera-orientation calibration. To elucidate this contribution, an evaluation is performed against the reported methods of self-calibration of camera orientation. The processing time is also described.

  17. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
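
    A common way for a camera-based sensor to recover the skylight polarization direction is to sample the intensity behind analyzers at several orientations and solve for the linear Stokes components. A minimal sketch assuming analyzers at 0°, 45° and 90° (this three-channel arrangement is an illustrative assumption; the record does not specify the sensor's exact optical layout):

```python
import math

def polarization_angle(i0, i45, i90):
    """Estimate the linear polarization direction from intensities measured
    behind analyzers at 0, 45 and 90 degrees (three-channel Stokes method).

    I(phi) = 0.5 * (S0 + S1*cos(2*phi) + S2*sin(2*phi)), so
    S1 = I0 - I90, S2 = 2*I45 - I0 - I90, and theta = 0.5*atan2(S2, S1).
    Returns the angle in radians.
    """
    s1 = i0 - i90
    s2 = 2 * i45 - i0 - i90
    return 0.5 * math.atan2(s2, s1)
```

    With fully polarized light of unit intensity at 30°, Malus's law gives I(φ) = cos²(φ − 30°), and the estimator recovers 30° exactly.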

  18. A bionic camera-based polarization navigation sensor.

    PubMed

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-07-21

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode.

  19. Wireless capsule endoscopy video reduction based on camera motion estimation.

    PubMed

    Liu, Hong; Pan, Ning; Lu, Heng; Song, Enmin; Wang, Qian; Hung, Chih-Cheng

    2013-04-01

    Wireless capsule endoscopy (WCE) is a novel technology aiming for investigating the diseases and abnormalities in small intestine. The major drawback of WCE examination is that it takes a long time to examine the whole WCE video. In this paper, we present a new reduction scheme for WCE video to reduce the examination time. To achieve this task, a WCE video motion model is proposed. Under this motion model, the WCE imaging motion is estimated in two stages (the coarse level and the fine level). In the coarse level, the WCE camera motion is estimated with a combination of Bee Algorithm and Mutual Information. In the fine level, the local gastrointestinal tract motion is estimated with SIFT flow. Based on the result of WCE imaging motion estimation, the reduction scheme preserves key images in WCE video with scene changes. From experimental results, we notice that the proposed motion model is suitable for the motion estimation in successive WCE images. Through the comparison with APRS and FCM-NMF scheme, our scheme can produce an acceptable reduction sequence for browsing and examination. PMID:22868484
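
    The mutual-information similarity used in the coarse-level motion search above can be sketched from the joint histogram of two frames (a generic implementation; the Bee Algorithm search over candidate motions is not shown, and parameter choices are illustrative):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two equally sized grayscale images,
    computed from their joint intensity histogram. Higher values indicate
    better alignment, so MI can score candidate camera motions."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = hist / hist.sum()                 # joint distribution
    px = pxy.sum(axis=1, keepdims=True)     # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)     # marginal of img_b
    nz = pxy > 0                            # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

    An image compared with itself yields its (binned) entropy, while an uninformative constant image yields zero.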

  20. Wireless capsule endoscopy video reduction based on camera motion estimation.

    PubMed

    Liu, Hong; Pan, Ning; Lu, Heng; Song, Enmin; Wang, Qian; Hung, Chih-Cheng

    2013-04-01

    Wireless capsule endoscopy (WCE) is a novel technology aiming for investigating the diseases and abnormalities in small intestine. The major drawback of WCE examination is that it takes a long time to examine the whole WCE video. In this paper, we present a new reduction scheme for WCE video to reduce the examination time. To achieve this task, a WCE video motion model is proposed. Under this motion model, the WCE imaging motion is estimated in two stages (the coarse level and the fine level). In the coarse level, the WCE camera motion is estimated with a combination of Bee Algorithm and Mutual Information. In the fine level, the local gastrointestinal tract motion is estimated with SIFT flow. Based on the result of WCE imaging motion estimation, the reduction scheme preserves key images in WCE video with scene changes. From experimental results, we notice that the proposed motion model is suitable for the motion estimation in successive WCE images. Through the comparison with APRS and FCM-NMF scheme, our scheme can produce an acceptable reduction sequence for browsing and examination.

  1. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    Indoor Gothic apses provide a complex environment for virtualization using imaging techniques due to their light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photographic capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable by using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close range photogrammetry, and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been applied to the apse, since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  2. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.
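    OpenCV's calibration pipeline follows Zhang's planar-target method, whose geometric core is fitting homographies between the planar target and its images. As a dependency-free sketch of that core step (an illustration, not the paper's code), a Direct Linear Transform (DLT) homography fit looks like:

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct Linear Transform: fit H such that dst ~ H @ src (homogeneous).
    src, dst: (N, 2) arrays of corresponding points, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)      # null-space vector holds the 9 entries of H
    return H / H[2, 2]            # fix the arbitrary scale

def apply_homography(H, pts):
    """Map (N, 2) points through H with the homogeneous divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

    From several such homographies of a chessboard in different poses, Zhang's method then recovers the intrinsic parameters; OpenCV wraps the whole procedure in `calibrateCamera`.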

  3. Microstructure Evaluation of Fe-BASED Amorphous Alloys Investigated by Doppler Broadening Positron Annihilation Technique

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Huang, Ping; Wang, Yuxin; Yan, Biao

    2013-07-01

    The microstructure of Fe-based amorphous and nanocrystalline soft magnetic alloys has been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM) and the Doppler broadening positron annihilation technique (PAT). Doppler broadening measurements reveal that amorphous alloys (Finemet, Type I) which can form a nanocrystalline phase have more defects (free volume) than alloys (Metglas, Type II) which cannot form this microstructure. XRD and TEM characterization indicates that the nanocrystallization of the amorphous Finemet alloy occurs at 460°C, where nanocrystallites of α-Fe with an average grain size of a few nanometers are formed in an amorphous matrix. With increasing annealing temperature up to 500°C, the average grain size increases to around 12 nm. During the annealing of the Finemet alloy, it has been demonstrated that positrons annihilate in quenched-in defects, the crystalline nanophase and amorphous-nanocrystalline interfaces. The change of the line shape parameter S with annealing temperature in the Finemet alloy is mainly due to structural relaxation, the pre-nucleation of Cu nuclei and the nanocrystallization of the α-Fe(Si) phase during annealing. This study provides new insight into positron behavior during the nanocrystallization of metallic glasses, especially in the presence of single or multiple nanophases embedded in the amorphous matrix.
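    The line-shape parameter S used above is conventionally defined as the fraction of annihilation-peak counts falling in a narrow window around 511 keV; more open-volume defects mean annihilation with lower-momentum electrons, a narrower Doppler-broadened peak, and therefore a higher S. A generic sketch of that definition (the window widths are illustrative choices, not the values used in the paper):

```python
import numpy as np

def s_parameter(energies, counts, center=511.0, central_width=1.6, total_width=16.0):
    """Line-shape (S) parameter: counts in a narrow central window of the
    511 keV annihilation peak divided by counts in the whole peak window.
    Window widths (keV) are illustrative assumptions."""
    central = np.abs(energies - center) <= central_width / 2
    total = np.abs(energies - center) <= total_width / 2
    return counts[central].sum() / counts[total].sum()
```

    A defect-rich sample (narrower peak) thus yields a larger S than a defect-poor one (broader peak), which is how the Type I / Type II comparison above is read off the spectra.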

  4. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  5. Ultra fast x-ray streak camera for ten inch manipulator based platforms.

    PubMed

    Marley, E V; Shepherd, R; Fulkerson, S; James, L; Emig, J; Norman, D

    2012-10-01

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The Lawrence Livermore National Laboratory ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  6. An aerial composite imaging method with multiple upright cameras based on axis-shift theory

    NASA Astrophysics Data System (ADS)

    Fang, Junyong; Liu, Xue; Xue, Yongqi; Tong, Qingxi

    2010-11-01

    Several composite camera systems have been made for wide coverage using 3 or 4 oblique cameras. A virtual projecting center and image are used for geometrical correction and mosaicking of images with different projecting angles and different spatial resolutions caused by the oblique cameras. An imaging method based on axis-shift theory is proposed to acquire wide coverage images with several upright cameras. Four upright camera lenses have the same wide angle of view. The optical axis of each lens is not on the center of its CCD, and each CCD covers only one part of the whole focal plane. The oblique deformation caused by oblique cameras is avoided by this axis-shift imaging method. The principle and parameters are given and discussed. A prototype camera system is constructed from common DSLR (digital single lens reflex) cameras. The angle of view can exceed 80 degrees along the flight direction when the focal length is 24 mm, and the ratio of base line to height can exceed 0.7 when the longitudinal overlap is 60%. Some original and mosaicked images captured by this prototype system in ground and airborne experiments are given at last. Test results show that the upright imaging method can effectively avoid oblique deformation and meet the geometrical precision required for image mosaicking.
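    The widening effect of shifting the optical axis off the sensor centre follows from simple pinhole geometry: a sensor of width w whose centre sits a distance s from the axis spans angles from atan((s - w/2)/f) to atan((s + w/2)/f). The sketch below shows how two symmetrically shifted sensors at f = 24 mm can exceed the 80-degree coverage quoted above; the sensor width and shift are assumed example values, not parameters from the paper.

```python
import math

def shifted_sensor_fov(focal_mm, width_mm, shift_mm):
    """Angular span (degrees) of a sensor of given width whose centre is
    shifted off the optical axis by shift_mm. The near angle is signed:
    negative while the sensor still straddles the axis."""
    near = math.degrees(math.atan((shift_mm - width_mm / 2) / focal_mm))
    far = math.degrees(math.atan((shift_mm + width_mm / 2) / focal_mm))
    return near, far

def composite_fov(focal_mm, width_mm, shift_mm):
    """Total coverage of two sensors shifted symmetrically left and right."""
    _, far = shifted_sensor_fov(focal_mm, width_mm, shift_mm)
    return 2 * far
```

    For example, with a hypothetical 36 mm wide sensor at f = 24 mm, the unshifted pair covers about 74 degrees, while a 10 mm axis shift pushes the composite coverage past 98 degrees.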

  7. Thematic investigations in France based on metric camera imagery

    NASA Astrophysics Data System (ADS)

    Lecordix, P. Y.

    1985-04-01

    Spacelab metric camera images were used to study geological features, land use, and forestry, and were compared with other data sources used in cartography. For geological surveys, the metric camera is comparable to SPOT satellite, and better than LANDSAT. For land use, Spacelab images are unsatisfactory in urban areas; woodland and scrub is over-represented due to shadow effects and inclusion of water covered with aquatic plants; forest distribution is well reproduced; sandy features are well identified. For forest inventories, results are surprisingly good, e.g., only 4% error in distinguishing resinous and leafy trees.

  8. The E166 experiment: Development of an Undulator-Based Polarized Positron Source for the International Linear Collider

    SciTech Connect

    Kovermann, J.; Stahl, A.; Mikhailichenko, A.A.; Scott, D.; Moortgat-Pick, G.A.; Gharibyan, V.; Pahl, P.; Poschl, R.; Schuler, K.P.; Laihem, K.; Riemann, S.; Schalicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K.T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.J.; /SLAC /Tel Aviv U. /Tennessee U.

    2011-11-14

    A longitudinally polarized positron beam is foreseen for the International Linear Collider (ILC). A proof-of-principle experiment has been performed in the Final Focus Test Beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e⁺ and e⁻. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of 3.4% and ≈1% for photons and positrons respectively, and the expected positron longitudinal polarization covers a range from 50% to 90%. The full exploitation of the physics potential of an international linear collider (ILC) will require the development of polarized positron beams. Having both e⁺ and e⁻ beams polarized will provide new insight into the structure of couplings and thus give access to physics beyond the standard model [1]. The concept for a polarized positron source is based on circularly polarized photon sources. These photons are then converted to longitudinally polarized e⁺e⁻ pairs. While an experiment at KEK [1a] uses Compton backscattering [2], the E166 experiment uses a helical undulator to produce polarized photons. An undulator-based positron source for the ILC has been proposed in [3,4]. The proposed scheme for an ILC positron source is illustrated in figure 1. In this scheme, a 150 GeV electron beam passes through a 120 m long helical undulator to produce an intense photon beam with a high degree of circular polarization. These photons are converted in a thin target to e⁺e⁻ pairs. The polarized positrons are then collected, pre-accelerated to the damping ring and injected into the main linac. The E166 experiment is

  9. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  10. [New medical imaging based on electron tracking Compton camera (ETCC)].

    PubMed

    Tanimori, Toru; Kubo, Hidetoshi; Kabuki, Shigeto; Kimura, Hiroyuki

    2012-01-01

    We have developed an Electron-Tracking Compton Camera (ETCC) for medical imaging, owing to its wide energy dynamic range (200-1,500 keV) and wide field of view (FOV, 3 sr). This camera has the potential to support the development of new reagents. We have carried out several imaging studies as examples: (1) simultaneous 18F-FDG and 131I-MIBG imaging for double clinical tracer imaging; (2) imaging of some minerals (Mn-54, Zn-65, Fe-59) in mice and plants. In addition, the ETCC has the potential for real-time monitoring of the Bragg peak location by imaging prompt gamma rays during beam therapy. We carried out a water phantom experiment using a 140 MeV proton beam, and obtained images of both 511 keV and high energy gamma rays (800-2,000 keV); the latter image showed better correlation with the Bragg peak location. Another potential of the ETCC is the reconstruction of 3D images using only a one-head camera, without rotation of either the target or the camera. Good 3D images of a thyroid gland phantom and of a mouse with a tumor were observed. In order to advance these features to practical use, we are improving all the components and will then construct a multi-head ETCC system.
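    Event reconstruction in a Compton camera rests on the Compton kinematic relation: from the scattered-gamma energy and the recoil-electron energy (whose sum is the incident energy), the scattering angle follows. A generic sketch of that relation, not the ETCC reconstruction code:

```python
import math

MEC2 = 511.0  # electron rest energy in keV

def compton_scatter_angle(e_gamma_scattered, e_electron):
    """Gamma scattering angle in degrees from the two measured energies:
    the incident energy is their sum, and
    cos(theta) = 1 - me*c^2 * (1/E_scattered - 1/E_incident)."""
    e0 = e_gamma_scattered + e_electron
    cos_theta = 1.0 - MEC2 * (1.0 / e_gamma_scattered - 1.0 / e0)
    cos_theta = max(-1.0, min(1.0, cos_theta))  # guard against rounding
    return math.degrees(math.acos(cos_theta))
```

    What distinguishes an electron-tracking Compton camera from a conventional one is that the recoil electron's direction is also measured, which reduces each event's origin from a full Compton cone to a short arc; the angle formula itself is unchanged.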

  11. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt. Triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  12. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
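    The regulation idea described above, nudging a self-timed sensor's speed through its supply voltage until its line period matches the target, can be sketched as a simple proportional control loop. The gain, voltage limits and the linear period-versus-voltage model below are hypothetical illustrations, not Awaiba specifications:

```python
def sync_controller(measured_period_ns, target_period_ns, voltage,
                    gain=1e-4, vmin=1.6, vmax=2.0):
    """One step of a (hypothetical) proportional control law: the sensor
    runs faster at higher supply voltage, so raise the voltage when the
    measured line period is too long and lower it when it is too short."""
    error = measured_period_ns - target_period_ns
    voltage = voltage + gain * error
    return min(vmax, max(vmin, voltage))   # clamp to safe supply range

def camera_period(voltage):
    """Toy sensor model: line period shrinks linearly as voltage rises."""
    return 2000.0 - 500.0 * voltage
```

    Iterating the loop drives the toy sensor's line period to the target, after which a Master-Slave scheme like the one described above can align the frame phases as well.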

  13. Principal axis-based correspondence between multiple cameras for people tracking.

    PubMed

    Hu, Weiming; Hu, Min; Zhou, Xue; Tan, Tieniu; Lou, Jianguang; Maybank, Steve

    2006-04-01

    Visual surveillance using multiple cameras has attracted increasing interest in recent years. Correspondence between multiple cameras is one of the most important and basic problems which visual surveillance using multiple cameras brings. In this paper, we propose a simple and robust method, based on principal axes of people, to match people across multiple cameras. The correspondence likelihood reflecting the similarity of pairs of principal axes of people is constructed according to the relationship between "ground-points" of people detected in each camera view and the intersections of principal axes detected in different camera views and transformed to the same view. Our method has the following desirable properties: 1) Camera calibration is not needed. 2) Accurate motion detection and segmentation are less critical due to the robustness of the principal axis-based feature to noise. 3) Based on the fused data derived from correspondence results, positions of people in each camera view can be accurately located even when the people are partially occluded in all views. The experimental results on several real video sequences from outdoor environments have demonstrated the effectiveness, efficiency, and robustness of our method. PMID:16566515

  14. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    The CCD plane of a space tridimensional mapping camera can deviate from the focal plane (including deviation due to a change in camera focal length) under space environment conditions and under the vibration and impact of satellite launch, and image resolution then degrades because of defocusing. For a tridimensional mapping camera, variations in principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is, under vacuum conditions and within the focusing range, to calibrate the position of the CCD plane against the code of a photoelectric encoder; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a step motor compensates the defocus of the CCD plane. However, if the camera focal length itself changes under space environment conditions or launch vibration and impact, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism is put forward: a quadrant detector is adopted to measure the astigmatism caused by the deviation of the CCD plane, and, referring to the calibrated relation between the CCD plane position and the astigmatism, the deviation vector of the CCD plane can be obtained. This method covers all factors that cause deviation of the CCD plane. Experimental results show that the focusing resolution of a mapping camera focusing mechanism based on the astigmatic method can reach 0.25 μm.
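    In the classic astigmatic method (used, for example, in optical disc pickups), a quadrant detector behind an astigmatic element yields a focus-error signal directly from the four quadrant intensities: the spot is circular in focus and elongates toward one diagonal or the other with defocus. A generic sketch of that signal, not the paper's calibrated measurement chain:

```python
def focus_error_signal(a, b, c, d):
    """Normalized astigmatic focus-error signal from quadrant detector
    intensities A..D (diagonal pairs A+C vs B+D). Roughly zero in focus;
    its sign indicates the direction of the focal-plane deviation."""
    total = a + b + c + d
    return (a + c - b - d) / total
```

    Once the relation between this signal and the CCD-plane position has been calibrated, as described above, reading the signal gives both the magnitude and the sign of the defocus to feed to the focusing mechanism.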

  15. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. The deployment aims at adopting the least number of cameras to cover the whole scene, which may have obstacles to occlude the line of sight, with expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
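    The ℓ0-to-ℓ1 relaxation named above is the standard sparse-representation trick: instead of counting selected cameras directly (non-convex), one minimizes an ℓ1-penalized least-squares objective, which is convex. As a small self-contained illustration of such an ℓ1 solver (iterative soft-thresholding, ISTA; the paper's actual deployment formulation and solver may differ):

```python
import numpy as np

def ista_l1(A, y, lam, steps=1000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA): gradient step, then shrink toward zero."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

    The ℓ1 penalty drives most entries exactly to zero, so the surviving nonzeros play the role of the selected subset (here, the retained cameras from the redundant initial deployment).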

  16. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on beacon photogrammetry location techniques, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems and camera coordinate systems were established respectively. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target coordinate systems and the camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were changed in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.

  17. Development of a treatment planning system for BNCT based on positron emission tomography data: preliminary results

    NASA Astrophysics Data System (ADS)

    Cerullo, N.; Daquino, G. G.; Muzi, L.; Esposito, J.

    2004-01-01

    Present standard treatment planning (TP) for glioblastoma multiforme (GBM - a kind of brain tumor), used in all boron neutron capture therapy (BNCT) trials, requires the construction (based on CT and/or MRI images) of a 3D model of the patient head, in which several regions, corresponding to different anatomical structures, are identified. The model is then employed by a computer code to simulate radiation transport in human tissues. The assumption is always made that considering a single value of boron concentration for each specific region will not lead to significant errors in dose computation. The concentration values are estimated "indirectly", on the basis of previous experience and blood sample analysis. This paper describes an original approach, with the introduction of data on the in vivo boron distribution, acquired by a positron emission tomography (PET) scan after labeling the BPA (borono-phenylalanine) with the positron emitter 18F. The feasibility of this approach was first tested with good results using the code CARONTE. Now a complete TPS is under development. The main features of the first version of this code are described and the results of a preliminary study are presented. Significant differences in dose computation arise when the two different approaches ("standard" and "PET-based") are applied to the TP of the same GBM case.

  18. Design of an infrared camera based aircraft detection system for laser guide star installations

    SciTech Connect

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance resulting from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near infrared cameras (instead of active radar) is described in this report.

  19. Study of CT-based positron range correction in high resolution 3D PET imaging

    NASA Astrophysics Data System (ADS)

    Cal-González, J.; Herraiz, J. L.; España, S.; Vicente, E.; Herranz, E.; Desco, M.; Vaquero, J. J.; Udías, J. M.

    2011-08-01

    Positron range limits the spatial resolution of PET images and has a different effect for different isotopes and positron propagation materials. It is therefore important to consider it during image reconstruction in order to obtain optimal image quality. Positron range distributions for the most common isotopes used in PET in different materials were computed using Monte Carlo simulations with PeneloPET. The range profiles were introduced into the 3D OSEM image reconstruction software FIRST and employed to blur the image either in the forward projection only or in both the forward and backward projections. The blurring introduced takes into account the different materials in which the positron propagates. Information on these materials may be obtained, for instance, from a segmentation of a CT image. The results of introducing positron blurring in both forward and backward projection operations were compared to using it only during forward projection. Furthermore, the effect of different shapes of positron range profile on the quality of the reconstructed images with positron range correction was studied. For high positron energy isotopes, the reconstructed images show significant improvement in spatial resolution when positron range is taken into account during reconstruction, compared to reconstructions without positron range modeling.
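    Incorporating positron range into the forward projection amounts to blurring each source voxel with a material-dependent annihilation-point kernel before projecting. The 1-D sketch below uses Gaussian stand-in kernels selected per source voxel from a segmented material map; the paper instead uses Monte Carlo range profiles from PeneloPET, so the kernel shapes and widths here are purely illustrative assumptions.

```python
import numpy as np

def range_kernel(fwhm_mm, pixel_mm, half_width=10):
    """Gaussian stand-in for a positron annihilation-point distribution."""
    sigma = fwhm_mm / (2.355 * pixel_mm)
    x = np.arange(-half_width, half_width + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()                       # normalized: counts are conserved

def blur_by_material(activity, material_ids, kernels):
    """Blur a 1-D activity profile, choosing the range kernel of the
    material at each source voxel (e.g. from a segmented CT)."""
    out = np.zeros_like(activity, dtype=float)
    half = (len(next(iter(kernels.values()))) - 1) // 2
    for i, a in enumerate(activity):
        if a == 0:
            continue
        k = kernels[material_ids[i]]
        lo = max(0, i - half)
        hi = min(len(activity), i + half + 1)
        out[lo:hi] += a * k[lo - (i - half): hi - (i - half)]
    return out
```

    A low-density material (longer range, wider kernel) spreads a point source more than a dense one, which is exactly the material dependence the CT segmentation supplies to the reconstruction.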

  20. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. A high-accuracy camera calibration algorithm is therefore proposed, based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.

  1. Optical system based on a CCD camera for ethanol detection

    NASA Astrophysics Data System (ADS)

    Martínez-Hipatl, C.; Muñoz-Aguirre, S.; Muñoz-Guerrero, R.; Castillo-Mixcóatl, J.; Beltrán-Pérez, G.; Gutiérrez-Salgado, J. M.

    2013-10-01

    This work reports the optimization of an optical system used to detect and quantify volatile organic compounds (VOC). The sensor consisted of a polydimethylsiloxane (PDMS) sensing film deposited on a glass substrate by the spin-coating technique. The PDMS has the property of swelling and/or changing its refractive index when it interacts with molecules of VOC in vapor phase. In order to measure the PDMS swelling, a charge-coupled device (CCD) camera was employed to evaluate the interference fringe shift in a Pohl interferometric arrangement. With this approach, it is possible to use each pixel of the CCD camera as a single photodetector in the arrangement. Similarly, different computer algorithms were developed in order to acquire and process the obtained data. The improvements in the system allowed the acquisition and plot of 1 datum per second. The steady-state responses of the PDMS sensors in the presence of ethanol vapor were analyzed. The obtained results showed that noise level was reduced approximately three times after performing data processing.

  2. Positron microprobe at LLNL

    SciTech Connect

    Asoka, P; Howell, R; Stoeffl, W

    1998-11-01

    The electron linac based positron source at Lawrence Livermore National Laboratory (LLNL) provides the world's highest current beam of keV positrons. We are building a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with sub-micron resolution. The widely spaced and intense positron packets from the tungsten moderator at the end of the 100 MeV LLNL linac are captured and trapped in a magnetic bottle. The positrons are then released in 1 ns bunches at a 20 MHz repetition rate. With a three-stage re-moderation we will compress the cm-sized original beam to a 1 micro-meter diameter final spot on the target. The buncher will compress the arrival time of positrons on the target to less than 100 ps. A detector array with up to 60 BaF2 crystals in paired coincidence will measure the annihilation radiation with high efficiency and low background. The energy of the positrons can be varied from less than 1 keV up to 50 keV.

  3. A descriptive geometry based method for total and common cameras fields of view optimization

    NASA Astrophysics Data System (ADS)

    Salmane, H.; Ruichek, Y.; Khoudour, L.

    2011-07-01

    The presented work is conducted in the framework of the ANR-VTT PANsafer project (Towards a safer level crossing). One of the objectives of the project is to develop a video surveillance system that will be able to detect and recognize potentially dangerous situations around level crossings. This paper addresses the problem of camera positioning and orientation for optimal viewing of monitored scenes. In general, adjusting camera position and orientation is done experimentally and empirically by considering different geometrical configurations. This step requires a lot of time to adjust, even approximately, the total and common fields of view of the cameras, especially in constrained environments such as level crossings. In order to simplify this task and to obtain more precise camera positioning and orientation, we propose in this paper a method that automatically optimizes the total and common camera fields of view with respect to the desired scene. Based on descriptive geometry, the method estimates the best camera positions and orientations by optimizing the surfaces of 2D domains that are obtained by projecting/intersecting the field of view of each camera on/with horizontal and vertical planes. The proposed method is evaluated and tested to demonstrate its effectiveness.

  4. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
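    The depth-of-field trade-off discussed above follows from thin-lens geometry: the DOF limits are set by the hyperfocal distance H = f^2/(N*c) + f, where f is the focal length, N the f-number and c the circle-of-confusion criterion. A generic sketch of those textbook formulas (the circle-of-confusion value is an illustrative assumption, not taken from the paper):

```python
def depth_of_field(focal_mm, f_number, focus_dist_mm, coc_mm=0.015):
    """Near and far limits of acceptable sharpness for a thin-lens model.
    coc_mm is the circle-of-confusion criterion (sensor-dependent; the
    default here is just an illustrative assumption)."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = focus_dist_mm * (h - focal_mm) / (h + focus_dist_mm - 2 * focal_mm)
    if focus_dist_mm >= h:
        far = float('inf')        # everything beyond the near limit is sharp
    else:
        far = focus_dist_mm * (h - focal_mm) / (h - focus_dist_mm)
    return near, far
```

    Stopping the lens down (larger f-number) shortens the hyperfocal distance and widens the DOF, which is the kind of lens trade-off the empirical study above quantifies against measured head movement.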

  5. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  6. Metric Potential of a 3D Measurement System Based on Digital Compact Cameras

    PubMed Central

    Sanz-Ablanedo, Enoc; Rodríguez-Pérez, José Ramón; Arias-Sánchez, Pedro; Armesto, Julia

    2009-01-01

    This paper presents an optical measuring system based on low-cost, high-resolution digital cameras. Once the cameras are synchronised, the portable and adjustable system can be used to observe living beings, bodies in motion, or deformations of very different sizes. Each of the cameras has been modelled individually and studied with regard to the photogrammetric potential of the system. We have investigated the photogrammetric precision obtained from the crossing of rays, the repeatability of results, and the accuracy of the coordinates obtained. Systematic and random errors are identified when assessing whether the precision of the system can validly be defined from the crossing of rays or from marking residuals in the images. The results clearly demonstrate the capability of a low-cost multiple-camera system to measure with sub-millimetre precision. PMID:22408520

  7. The Australian government's review of positron emission tomography: evidence-based policy-making in action.

    PubMed

    Ware, Robert E; Francis, Hilton W; Read, Kenneth E

    2004-06-21

    The Commonwealth Government constituted the Medicare Services Advisory Committee (MSAC) to implement its commitment to entrench the principles of evidence-based medicine in Australian clinical practice. With its recent review of positron emission tomography (PETReview), the Commonwealth intervened in an established MSAC process, and sanctioned the stated objective to restrict expenditure on the technology. In our opinion: The evaluation of evidence by PETReview was fundamentally compromised by a failure to meet the terms of reference, poor science, poor process and unique decision-making benchmarks. By accepting the recommendations of PETReview, the Commonwealth is propagating information which is not of the highest quality. The use of inferior-quality information for decision-making by doctors, patients and policy-makers is likely to harm rather than enhance healthcare outcomes. PMID:15200360

  8. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  9. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  10. Multi-camera calibration based on openCV and multi-view registration

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method combining OpenCV-based calibration with multi-view registration is proposed. First, a Zhang calibration plate (an 8 × 8 chessboard pattern) is imaged from different angles to obtain 9 groups of images with a number of cameras (three industrial-grade CCDs), and OpenCV is used to quickly calibrate the intrinsic parameters of each camera. Secondly, based on the correspondences between the camera views, the computation of the rotation matrix and translation matrix is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of the derivative of a matrix-valued function, formulae for the rotation matrix and translation matrix are deduced using a singular value decomposition algorithm. Afterwards, an iterative method is used to obtain the entire coordinate transformation of pair-wise views; thus precise multi-view registration can be conveniently achieved, and the relative positions of the cameras (the extrinsic parameters) can be obtained. Experimental results show that the method is practical for multi-camera calibration.
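
    The SVD-based derivation of the rotation and translation between pair-wise views can be illustrated with a minimal sketch of least-squares rigid registration (the Kabsch algorithm); the function name and the NumPy formulation are ours, not code from the paper:

```python
import numpy as np

def register_views(P, Q):
    """Estimate rotation R and translation t aligning point set P to Q
    (rows are 3-D points, q_i ~ R p_i + t), via SVD of the cross-covariance
    matrix -- the classic Kabsch least-squares rigid registration."""
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

    Given matched points between two camera views, `register_views` returns the relative pose; iterating such pair-wise registrations over the network corresponds to the multi-view registration step described above.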

  11. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    NASA Astrophysics Data System (ADS)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high-resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each sensor type based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which enables a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to reveal any blurring caused by forward motion. The comparison showed that both cameras deliver high-accuracy photogrammetric results after post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward-motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.

  12. A Compton camera application for the GAMOS GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Arce, P.; Judson, D. S.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Dormand, J.; Jones, M.; Nolan, P. J.; Sampson, J. A.; Scraggs, D. P.; Sweeney, A.; Lazarus, I.; Simpson, J.

    2012-04-01

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double sided high purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
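
    The Compton kinematics used for electronic collimation can be sketched as follows; this is a generic illustration of the Compton scattering formula, not code from the GAMOS application (the function name and energies are our own):

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def compton_angle(e0_kev, e1_kev):
    """Scattering angle (radians) of a photon of initial energy e0 that
    deposits energy e1 in the scatter detector. From the Compton formula
    cos(theta) = 1 - me*c^2 * (1/E' - 1/E0), with E' = E0 - e1 the
    scattered-photon energy. This angle defines the opening of the cone
    on which the source must lie."""
    e_scat = e0_kev - e1_kev
    if e_scat <= 0:
        raise ValueError("deposited energy must be below the photon energy")
    cos_t = 1.0 - ME_C2 * (1.0 / e_scat - 1.0 / e0_kev)
    if not -1.0 <= cos_t <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_t)
```

    Image reconstruction then intersects the cones from many events to locate the source within the wide field of view.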

  13. Defining habitat covariates in camera-trap based occupancy studies.

    PubMed

    Niedballa, Jürgen; Sollmann, Rahel; bin Mohamed, Azlan; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10-500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had the most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations.

  14. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small- to medium-sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had the most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remotely sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  15. Ultrashort megaelectronvolt positron beam generation based on laser-accelerated electrons

    NASA Astrophysics Data System (ADS)

    Xu, Tongjun; Shen, Baifei; Xu, Jiancai; Li, Shun; Yu, Yong; Li, Jinfeng; Lu, Xiaoming; Wang, Cheng; Wang, Xinliang; Liang, Xiaoyan; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2016-03-01

    Experimental generation of ultrashort MeV positron beams with high intensity and high density using a compact laser-driven setup is reported. A high-density gas jet is employed to generate MeV electrons with high charge; thus, a charge-neutralized MeV positron beam with high density is obtained as the laser-accelerated electrons irradiate high-Z solid targets. It is a novel electron-positron source for the study of laboratory astrophysics. Meanwhile, the MeV positron beam is pulsed with an ultrashort duration of tens of femtoseconds and has a high peak intensity of 7.8 × 10²¹ s⁻¹, thus allowing specific studies of fast kinetics in millimeter-thick materials with high time resolution and showing potential for applications in positron annihilation spectroscopy.

  16. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    Keeping the aerial camera focal plane in the correct position is critical to imaging quality. In order to correct focal-plane displacement introduced during maintenance, a new micro-displacement measuring system for the aerial camera focal plane, based on a Michelson interferometer, is designed in this paper; it rests on the phase-modulation principle and uses interference to measure the micro-displacement of the focal plane. The system takes a He-Ne laser as the light source and uses the Michelson interference mechanism to produce interference fringes; as the focal plane moves, the fringes change periodically, and recording these periodic changes yields the focal-plane displacement. Taking a linear CCD and its driving system as the fringe pick-up tool and relying on a frequency-conversion and differentiating system, the system determines the moving direction of the focal plane. After data collection, filtering, amplification, threshold comparison and counting, the CCD video signals of the interference fringes are sent to a computer, processed automatically, and the focal-plane micro-displacement is output. As a result, the focal-plane micro-displacement can be measured automatically by this system. Using a linear CCD as the fringe pick-up tool greatly improves the counting accuracy and almost eliminates manual counting error, improving the measurement accuracy of the system. The experiments demonstrate that the focal-plane displacement measurement accuracy is 0.2 nm, while laboratory tests and flights show that the focal-plane positioning is accurate and can satisfy the requirements of aerial camera imaging.
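
    The core of fringe-counting interferometry is the conversion from fringe count to displacement: in a Michelson interferometer the optical path changes by twice the mirror (here, focal-plane) movement, so one full fringe corresponds to λ/2. A minimal sketch, assuming the standard He-Ne line at 632.8 nm (the function name is ours):

```python
HE_NE_WAVELENGTH_NM = 632.8  # He-Ne laser line, nm

def displacement_nm(fringe_count, wavelength_nm=HE_NE_WAVELENGTH_NM):
    """Focal-plane displacement (nm) from the number of counted fringes.
    Each full fringe corresponds to a mirror displacement of lambda/2,
    because the reflected beam traverses the displaced arm twice."""
    return fringe_count * wavelength_nm / 2.0
```

    The direction of motion, as the abstract notes, cannot be recovered from the count alone; the paper's frequency-conversion and differentiating hardware supplies the sign.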

  17. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels. PMID:27534480

  18. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.

  19. Hydrogenated amorphous silicon (a-Si:H) based gamma camera: Monte Carlo simulations

    SciTech Connect

    Lee, H.; Drewery, J.S.; Hong, W.S.; Jing, T.; Kaplan, S.N.; Mireshghi, A.; Perez-Mendez, V.

    1994-01-01

    A new gamma camera using a-Si:H photodetectors has been designed for imaging the heart and other small organs. In this new design the photomultiplier tubes and the position-sensing circuitry are replaced by a 2-D array of a-Si:H p-i-n pixel photodetectors and readout circuitry built on a substrate. Without photomultiplier tubes, this camera is lightweight and hence can be made portable. To predict the characteristics and performance of this new gamma camera we performed Monte Carlo simulations. In the simulations, 128 × 128 imaging arrays of various pixel sizes were used, with 99mTc (140 keV) and 201Tl (70 keV) as radiation sources. From the simulations we obtained the resolution of the camera and the overall system, and the blurring effects due to scattering in the phantom. Using a Wiener filter for image processing, restoration of the blurred image could be achieved. Simulation results for the a-Si:H based gamma camera were compared with those of a conventional gamma camera.
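
    The Wiener restoration step mentioned above can be sketched in one dimension; this generic frequency-domain deconvolution with an assumed constant noise-to-signal power ratio is our illustration, not the paper's implementation:

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=0.01):
    """Restore a blurred 1-D signal with a frequency-domain Wiener filter:
    X_hat(k) = H*(k) / (|H(k)|^2 + NSR) * G(k), where H is the spectrum of
    the point-spread function, G the spectrum of the blurred signal, and
    NSR the (assumed constant) noise-to-signal power ratio."""
    n = len(blurred)
    H = np.fft.fft(psf, n)                    # PSF spectrum, zero-padded
    G = np.fft.fft(blurred, n)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
    return np.real(np.fft.ifft(W * G))
```

    The NSR term regularizes frequencies where the blur suppresses the signal; in 2-D gamma-camera imaging the same formula applies with 2-D FFTs.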

  20. Automated ethernet-based test setup for long wave infrared camera analysis and algorithm evaluation

    NASA Astrophysics Data System (ADS)

    Edeler, Torsten; Ohliger, Kevin; Lawrenz, Sönke; Hussmann, Stephan

    2009-06-01

    In this paper we present a new approach to automated camera calibration and specification. The proposed setup is optimized for working with uncooled long-wave infrared (thermal) cameras, although the concept itself is not restricted to those cameras. Every component of the setup, such as the black-body source, climate chamber, remote power switch, and the camera itself, is connected to a network via Ethernet, and a Windows XP workstation controls all components through the TCL script language. Besides communicating with the components, the script tool can also run Matlab code via the Matlab kernel. Data exchange during the measurement is possible and offers a variety of advantages, from a drastic reduction in the amount of data to an enormous speedup of the measuring procedure thanks to data analysis during measurement. A parameter-based software framework is presented to create generic test cases, where modifications to the test scenario do not require any programming skills. In the second part of the paper, measurement results of a self-developed GigE Vision thermal camera are presented, and correction algorithms providing high-quality image output are shown. These algorithms are fully implemented in the FPGA of the camera to provide real-time processing while maintaining GigE Vision as the standard transmission protocol and interface to arbitrary software tools. The artefacts taken into account are spatial noise, defective pixels and offset drift due to self-heating after power-on.
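
    Spatial noise and offset drift in uncooled thermal cameras are commonly corrected with a two-point non-uniformity correction derived from black-body reference frames at two known temperatures. The paper does not give its FPGA algorithm, so the following is a generic sketch of that standard technique:

```python
import numpy as np

def two_point_nuc(raw, cold_ref, hot_ref, t_cold, t_hot):
    """Classic two-point non-uniformity correction (NUC): per-pixel gain
    and offset are derived from flat-field frames acquired while viewing a
    black body at two temperatures (t_cold, t_hot). Applying them maps each
    pixel's raw response onto a common radiometric scale."""
    gain = (t_hot - t_cold) / (hot_ref - cold_ref)   # per-pixel gain map
    offset = t_cold - gain * cold_ref                # per-pixel offset map
    return gain * raw + offset
```

    In an automated Ethernet setup like the one described, the reference frames would be captured by commanding the black-body source to the two set points before each measurement run.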

  1. SIFT-Based Indoor Localization for Older Adults Using Wearable Camera

    PubMed Central

    Zhang, Boxue; Zhao, Qi; Feng, Wenquan; Sun, Mingui; Jia, Wenyan

    2015-01-01

    This paper presents an image-based indoor localization system for tracking older individuals' movement at home. In this system, images are acquired at a low frame rate by a miniature camera worn conveniently at the chest. The correspondence between adjacent frames is first established by matching SIFT (scale-invariant feature transform) key points in a pair of images. The location changes of these points are then used to estimate the position of the wearer based on the pinhole camera model. A preliminary study conducted in an indoor environment indicates that the location of the wearer can be estimated with adequate accuracy. PMID:26190909
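
    The pinhole-model step, turning matched key-point shifts into wearer motion, can be illustrated under the simplifying assumptions that the camera translates laterally and the matched points lie at a roughly constant known depth; the function and parameters below are ours, not the paper's:

```python
import numpy as np

def estimate_translation(du, dv, depth_m, focal_px):
    """Rough wearer-translation estimate from the mean image-plane shift
    (du, dv) of matched SIFT key points, using the pinhole relation
    du = f * dX / Z for a laterally translating camera viewing points at
    an approximately constant depth Z (an illustrative simplification)."""
    dX = np.mean(du) * depth_m / focal_px   # metres, horizontal
    dY = np.mean(dv) * depth_m / focal_px   # metres, vertical
    return dX, dY
```

    With a focal length of 800 px and points 2 m away, a 40-px mean horizontal shift corresponds to about 0.1 m of lateral motion.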

  2. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient; it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for a full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  3. Minicyclotron-based technology for the production of positron-emitting labelled radiopharmaceuticals

    SciTech Connect

    Barrio, J.R.; Bida, G.; Satyamurthy, N.; Padgett, H.C.; MacDonald, N.S.; Phelps, M.E.

    1983-01-01

    The use of short-lived positron emitters such as carbon 11, fluorine 18, nitrogen 13, and oxygen 15, together with positron-emission tomography (PET) for probing the dynamics of physiological and biochemical processes in the normal and diseased states in man is presently an active area of research. One of the pivotal elements for the continued growth and success of PET is the routine delivery of the desired positron emitting labelled compounds. To date, the cyclotron remains the accelerator of choice for production of medically useful radionuclides. The development of the technology to bring the use of cyclotrons to a clinical setting is discussed. (ACR)

  4. Narrow Field-Of-View Visual Odometry Based on a Focused Plenoptic Camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2015-03-01

    In this article we present a new method for visual odometry based on a focused plenoptic camera. The method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm, making visual odometry possible even for narrow field of view (FOV) cameras. We show that not only tracking benefits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to a single light-field image.

  5. Defocus compensation system of long focal aerial camera based on auto-collimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-ye; Zhao, Yu-liang; Xu, Zhao-lin

    2010-10-01

    Modern aerial reconnaissance cameras emphasize shooting performance at high altitude or at the long distances of oblique photography. To obtain larger-scale pictures that are easier to interpret, the camera needs a long focal length. However, a long-focal-length camera is more easily influenced by environmental conditions, which can greatly change the lens' back focus and thus greatly decrease the lens' resolution. Precise defocus compensation is therefore required for a long-focal-length aerial camera system. To realize this, a defocus compensation system based on auto-collimation is designed. Firstly, the causes of defocus in a long-focal-length camera are discussed: factors such as changes of atmospheric pressure and temperature and the oblique photographic distance are pointed out, and a mathematical equation for computing the camera's defocus amount is presented. Secondly, after analyzing the camera's defocus, a highly automated and intelligent electro-optical auto-collimation scheme is adopted. Before shooting, the focal surface is located by the electro-optical auto-collimation focus detection mechanism, and the airplane's altitude is imported through the electronic control system. The defocus amount is corrected by computing it and sending the signal to the focusing control motor. An efficient improved hill-climbing search algorithm is adopted for locating the focal surface in the correction process. When confirming the direction of the curve, the improved algorithm considers both the two previous focusing results and four points: if four consecutive points rise, the curve is confirmed as rising; if four consecutive points fall, the curve is confirmed as falling. In this way, a local peak appearing within two focusing steps can be avoided. The defocusing compensation system consists of optical component and precise
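
    A plain hill-climbing focus search over a discrete focus axis can be sketched as follows; this is a basic version without the paper's four-point direction check, and the function names are ours:

```python
def hill_climb_focus(sharpness, start, step=1):
    """Sketch of a hill-climbing autofocus search: step the focal surface
    in the uphill direction of a focus metric and stop one step past the
    peak. `sharpness` maps a discrete focus position to a focus metric
    (e.g. image contrast); returns the position with the best metric found."""
    pos = start
    best = sharpness(pos)
    # probe one step each way to pick the uphill direction
    direction = step if sharpness(pos + step) >= sharpness(pos - step) else -step
    while True:
        nxt = pos + direction
        val = sharpness(nxt)
        if val <= best:          # metric stopped improving: passed the peak
            return pos
        pos, best = nxt, val
```

    The improvement described in the abstract replaces the single-step stopping test with a check over four consecutive points, so that a spurious local bump in the metric between two focusing steps does not end the search prematurely.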

  6. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
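
    The MLEM baseline against which the Monte Carlo system matrix is compared uses the standard multiplicative update. A minimal dense-matrix sketch (a toy stand-in for the block-circulant system matrix described above; the function name is ours):

```python
import numpy as np

def mlem(A, counts, n_iter=500):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction:
    x_{k+1} = (x_k / s) * A^T ( y / (A x_k) ),  with sensitivity s = A^T 1.
    A is the system matrix (detector bins x image voxels) and `counts` the
    measured projection data y; the update preserves non-negativity."""
    A = np.asarray(A, dtype=float)
    y = np.asarray(counts, dtype=float)
    x = np.ones(A.shape[1])              # uniform non-negative start image
    sens = A.sum(axis=0)                 # column sums: per-voxel sensitivity
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / sens        # multiplicative EM update
    return x
```

    Swapping a line-integral model for a Monte Carlo derived `A`, as the paper does, changes only the matrix, not the iteration; the block-circulant structure from polar voxels then makes the forward and back projections cheap to apply.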

  7. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    NASA Astrophysics Data System (ADS)

    Saha, Krishnendu; Straus, Kenneth J.; Chen, Yu.; Glick, Stephen J.

    2014-08-01

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.

  8. 18F-Labeled Silicon-Based Fluoride Acceptors: Potential Opportunities for Novel Positron Emitting Radiopharmaceuticals

    PubMed Central

    Bernard-Gauthier, Vadim; Wängler, Carmen; Wängler, Bjoern; Schirrmacher, Ralf

    2014-01-01

    Background. Over recent years, radiopharmaceutical chemistry has experienced a wide variety of innovative pushes towards finding both novel and unconventional radiochemical methods to introduce fluorine-18 into radiotracers for positron emission tomography (PET). These “nonclassical” labeling methodologies based on silicon-, boron-, and aluminium-18F chemistry deviate from the commonplace bonding of an [18F]fluorine atom (18F) to either an aliphatic or aromatic carbon atom. One method in particular, the silicon-fluoride-acceptor isotopic exchange (SiFA-IE) approach, invalidates a dogma in radiochemistry that has been widely accepted for many years: the inability to obtain radiopharmaceuticals of high specific activity (SA) via simple IE. Methodology. The most advantageous feature of IE labeling in general is that the labeling precursor and labeled radiotracer are chemically identical, eliminating the need to separate the radiotracer from its precursor. SiFA-IE chemistry proceeds in dipolar aprotic solvents at room temperature and below, entirely avoiding the formation of radioactive side products during the IE. Scope of Review. A plethora of different SiFA species have been reported in the literature, ranging from small prosthetic groups and other compounds of low molecular weight to labeled peptides and, most recently, affibody molecules. Conclusions. The literature from 2006 to 2014 shows unambiguously that SiFA-IE and other silicon-based fluoride acceptor strategies relying on 18F− leaving group substitutions have the potential to become a valuable addition to radiochemistry. PMID:25157357

  9. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    PubMed

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, a small bore introduces parallax error that substantially degrades spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce this resolution degradation. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. To increase the count statistics of the system matrix computation and to reduce its storage requirements, only a subset of matrix elements was calculated; the remaining elements were estimated by exploiting the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% lower noise level and a 1.5- to 3-fold improvement in resolution compared with MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and to reduce image distortion at the periphery of the field of view compared with line-integral-based system matrix reconstruction.

  10. A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features.

    PubMed

    Wu, Xiaolong; Wu, Sentang; Xing, Zhihui; Jia, Xiang

    2016-01-01

    This paper presents a global calibration method for widely distributed vision sensors in ring topologies. A planar target with two mutually orthogonal groups of parallel lines is needed for each camera. First, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. The relative pose from each target to the reference target is then initialized through the chain of transformations, followed by nonlinear optimization based on the ring-topology constraint. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulated images, and real experiments all demonstrate that the proposed method is reliable and accurate. The accumulated error due to multiple coordinate transformations can be adjusted effectively by the proposed method. In the real experiment, eight targets were located in an area of about 1200 mm × 1200 mm; the accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations. PMID:27338386
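The chain-of-transformations initialization and the ring-closure constraint can be illustrated with planar rigid transforms (a schematic 2-D sketch with made-up pose values, not the paper's full 3-D formulation):

```python
import numpy as np

def se2(theta, tx, ty):
    """Homogeneous 2-D rigid transform (rotation theta, translation tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

# Hypothetical measured relative poses between the 8 neighbouring targets of
# a ring; each measurement carries a small 1 mrad rotation error.
n = 8
rel = [se2(2 * np.pi / n + 0.001, 100.0, 0.0) for _ in range(n)]

# Chain of transformations: pose of target k in the reference-target frame.
chain = [np.eye(3)]
for T in rel[:-1]:
    chain.append(chain[-1] @ T)

# Ring-closure constraint: composing all n relative poses should return the
# identity; the residual rotation here accumulates to n * 1 mrad = 8 mrad.
closure = np.linalg.multi_dot(rel)
residual_angle = np.arctan2(closure[1, 0], closure[0, 0])
```

The nonlinear optimization step then distributes such a closure residual over the n relative poses instead of letting it pile up at the end of the chain.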

  11. A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features

    PubMed Central

    Wu, Xiaolong; Wu, Sentang; Xing, Zhihui; Jia, Xiang

    2016-01-01

    This paper presents a global calibration method for widely distributed vision sensors in ring topologies. A planar target with two mutually orthogonal groups of parallel lines is needed for each camera. First, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. The relative pose from each target to the reference target is then initialized through the chain of transformations, followed by nonlinear optimization based on the ring-topology constraint. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulated images, and real experiments all demonstrate that the proposed method is reliable and accurate. The accumulated error due to multiple coordinate transformations can be adjusted effectively by the proposed method. In the real experiment, eight targets were located in an area of about 1200 mm × 1200 mm; the accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations. PMID:27338386

  12. Modeling and simulation of Positron Emission Mammography (PEM) based on double-sided CdTe strip detectors

    NASA Astrophysics Data System (ADS)

    Ozsahin, I.; Unlu, M. Z.

    2014-03-01

    Breast cancer is a leading cause of cancer death among women. Positron Emission Tomography (PET) mammography, also known as Positron Emission Mammography (PEM), is a method for imaging primary breast cancer. Over the past few years, PEM systems based on scintillation crystals have become dramatically more important in the diagnosis and treatment of early-stage breast cancer. However, these detectors have significant limitations, such as poor energy resolution, which can yield false-negative results (missed cancers) as well as false-positive results that raise suspicion of cancer and prompt unnecessary biopsies. In this work, a PEM scanner based on CdTe strip detectors is simulated via the Monte Carlo method and evaluated in terms of its spatial resolution, sensitivity, and image quality. The spatial resolution is found to be ~1 mm in all three directions. The results also show that a PEM scanner based on CdTe strip detectors can produce high-resolution images for early diagnosis of breast cancer.

  13. Broadband Sub-terahertz Camera Based on Photothermal Conversion and IR Thermography

    NASA Astrophysics Data System (ADS)

    Romano, M.; Chulkov, A.; Sommier, A.; Balageas, D.; Vavilov, V.; Batsale, J. C.; Pradere, C.

    2016-05-01

    This paper describes a fast sub-terahertz (THz) camera that is based on the use of a quantum infrared camera coupled with a photothermal converter, called a THz-to-Thermal Converter (TTC), thus allowing fast image acquisition. The performance of the experimental setup is presented and discussed, with an emphasis on the advantages of the proposed method for decreasing noise in raw data and increasing the image acquisition rate. A detectivity of 160 pW/√Hz per pixel has been achieved, and some examples of the practical implementation of sub-THz imaging are given.

  14. A G-APD based Camera for Imaging Atmospheric Cherenkov Telescopes

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, S.; Commichau, V.; Dorner, D.; Gendotti, A.; Grimm, O.; von Gunten, H.; Hildebrand, D.; Horisberger, U.; Köhne, J.-H.; Krähenbühl, T.; Kranich, D.; Lorenz, E.; Lustermann, W.; Mannheim, K.; Neise, D.; Pauss, F.; Renker, D.; Rhode, W.; Rissi, M.; Ribordy, M.; Röser, U.; Stark, L. S.; Stucki, J.-P.; Tibolla, O.; Viertel, G.; Vogler, P.; Weitzel, Q.

    2011-02-01

    Imaging Atmospheric Cherenkov Telescopes (IACT) for Gamma-ray astronomy are presently using photomultiplier tubes as photo sensors. Geiger-mode avalanche photodiodes (G-APD) promise an improvement in sensitivity and, important for this application, ease of construction, operation and ruggedness. G-APDs have proven many of their features in the laboratory, but a qualified assessment of their performance in an IACT camera is best undertaken with a prototype. This paper describes the design and construction of a full-scale camera based on G-APDs realized within the FACT project (First G-APD Cherenkov Telescope).

  15. NIR-camera-based online diagnostics of laser beam welding processes

    NASA Astrophysics Data System (ADS)

    Dorsch, Friedhelm; Braun, Holger; Keßler, Steffen; Pfitzner, Dieter; Rominger, Volker

    2012-03-01

    We have developed an on-axis camera-based online sensor system for laser beam welding diagnostics that detects thermal radiation in the near-infrared (NIR) spectral range between 1200 and 1700 nm. In addition to a sensor in the visible (VIS) range, our NIR camera detects the thermal radiation of the weld pool more clearly and is also sensitive to the radiation of the solidified weld seam. The NIR images are analyzed by real-time image processing: features are extracted from the images and evaluated to characterize the welding process. Keyhole and weld pool analysis complement VIS diagnostics, whereas observation of the weld seam and heat-affected zone with an NIR camera allows online heat-flux thermography. By this means we are able to detect bad joints in overlap welds ("false friends") online during the welding process.

  16. Mach-Zehnder-based optical marker/comb generator for streak camera calibration

    SciTech Connect

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate and establish a time reference, the markers or combs are indicia that serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher-frequency optical signal that is output through a fiber-coupled link to the streak camera.

  17. A Cherenkov camera with integrated electronics based on the ``Smart Pixel'' concept

    NASA Astrophysics Data System (ADS)

    Bulian, Norbert; Hirsch, Thomas; Hofmann, Werner; Kihm, Thomas; Kohnle, Antje; Panter, Michael; Stein, Michael

    2000-06-01

    As an option for the cameras of the HESS telescopes, the concept of a modular camera based on ``Smart Pixels'' was developed. A Smart Pixel contains the photomultiplier, the high-voltage supply for the photomultiplier, a dual-gain sample-and-hold circuit with a 14-bit dynamic range, a time-to-voltage converter, a trigger discriminator, trigger logic to detect a coincidence of X=1...7 neighboring pixels, and an analog ratemeter. The Smart Pixels plug into a common backplane which provides power, communicates trigger signals between neighboring pixels, and holds a digital control bus as well as an analog bus for multiplexed readout of pixel signals. The performance of the Smart Pixels has been studied using a 19-pixel test camera.

  18. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As its crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement towards different directions in the horizontal field can be performed by four pairs of virtual cameras, with full synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in a measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253

  19. Shape Function-Based Estimation of Deformation with Moving Cameras Attached to the Deforming Body

    NASA Astrophysics Data System (ADS)

    Jokinen, O.; Ranta, I.; Haggrén, H.; Rönnholm, P.

    2016-06-01

    The paper presents a novel method to measure the 3-D deformation of a large metallic frame structure of a crane under loading from one to several images, when the cameras must be attached to the self-deforming body, the structure sways during loading, and the imaging geometry is not optimal due to physical limitations. The solution is based on modeling the deformation with adequate shape functions and taking into account that the cameras move depending on the frame deformation. It is shown that the deformation can be estimated even from a single image of targeted points if the 3-D coordinates of the points are known or have been measured before loading using multiple cameras or some other measuring technique. The precision of the method is evaluated to be 1 mm at best, corresponding to 1:11400 of the average distance to the target.

  20. MTR STACK, TRA710, DETAIL OF BASE. CAMERA FACING NORTH. SIGN ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    MTR STACK, TRA-710, DETAIL OF BASE. CAMERA FACING NORTH. SIGN SAYS "DANGER, DO NOT USE THIS LADDER." TRA-605, PROCESS WATER BUILDING, IN VIEW AT LEFT. INL NEGATIVE NO. HD52-1-3. Mike Crane, Photographer, 5/2005 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  1. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  2. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three-dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron-emitting radiotracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line-integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The ``Three Dimensional Stochastic Sampling'' (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms with faster convergence rates.
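The matrix-free idea at the heart of such a procedure — simulating photon transport instead of storing a system response matrix — can be sketched in one dimension (an illustration only, not the author's code; the Gaussian "transport" model and all values are placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_forward(x, sigma, n_samples=100_000):
    """Matrix-free forward projection: draw emission sites from the current
    source estimate and simulate detection as a random Gaussian displacement
    standing in for photon transport (a full simulation would instead track
    gamma-ray interactions within the source and detector)."""
    p = x / x.sum()
    sites = rng.choice(len(x), size=n_samples, p=p)
    hits = np.clip(np.round(sites + rng.normal(0.0, sigma, n_samples)),
                   0, len(x) - 1).astype(int)
    counts = np.bincount(hits, minlength=len(x)).astype(float)
    return counts * (x.sum() / n_samples)   # rescale to expected counts

# A point-like source on a uniform background, "projected" without ever
# forming an explicit system response matrix.
src = np.full(32, 1.0)
src[16] += 100.0
fwd = mc_forward(src, sigma=1.5)
```

In an ML-EM-style loop, the ratio of measured to simulated projections would then be backprojected (again by sampling) to update the source estimate.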

  3. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms at clock rates above 100 MHz. This, combined with FPGAs' low cost, ease of use, and dedicated hardware resources, makes them an ideal technology for the data acquisition system of positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next-generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher-resolution images. In this paper two such processes, sub-clock-rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistical positioning-based algorithm. PMID:21961085
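Sub-clock-rate timing of this general kind is commonly done by interpolating the leading-edge threshold crossing between ADC samples. A minimal floating-point sketch of the idea (the FPGA would use fixed-point arithmetic; names and values here are illustrative, not the paper's algorithm):

```python
import numpy as np

def crossing_time(samples, dt, threshold):
    """Estimate the pulse arrival time with sub-sample precision by linearly
    interpolating the first threshold crossing of the sampled pulse.
    Assumes the waveform starts below threshold (so the crossing index > 0)."""
    idx = int(np.argmax(samples >= threshold))  # first sample at/above threshold
    s0, s1 = samples[idx - 1], samples[idx]
    return dt * (idx - 1 + (threshold - s0) / (s1 - s0))
```

For example, a ramp sampled at [0, 1, 2, 3, 4] with a threshold of 2.5 crosses exactly halfway between the third and fourth samples.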

  4. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    SciTech Connect

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-15

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low-background, high-efficiency radiocesium detectors, including semiconductor germanium detectors, are expensive and bulky. To solve this problem, we developed a radiocesium detector employing positron emission tomography (PET) technology. Because ¹³⁴Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can be measured selectively in coincidence. Major environmental gamma emitters such as ⁴⁰K (1.46 MeV) are single-photon emitters, so a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight Bi₄Ge₃O₁₂ (BGO) scintillation detectors in double rings (four per ring) and measured the coincidences between these detectors using a PET data acquisition system. A 50 × 50 × 30 mm BGO crystal was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring coincidences, we eliminated most single gamma photons from the energy distribution and detected only those from ¹³⁴Cs, at an average efficiency of 12%. The minimum detectable concentration of the system for a 100 s acquisition time is less than half of the food monitoring requirement in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising for detecting low-level radiocesium.
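The coincidence selection that suppresses single-photon emitters can be sketched as a timestamp sort (a schematic illustration, not the instrument's acquisition firmware; the 10 ns window is an assumed value):

```python
import numpy as np

def coincidences(t_a, t_b, window=10e-9):
    """Return events in detector A that have a partner event in detector B
    within +/- window seconds (simple software coincidence sorting).
    Cascade gammas (e.g. from 134Cs) share timestamps across detectors;
    single gammas (e.g. from 40K) do not, and are rejected."""
    t_b = np.sort(t_b)
    idx = np.searchsorted(t_b, t_a)
    left = np.abs(t_a - t_b[np.clip(idx - 1, 0, len(t_b) - 1)])
    right = np.abs(t_a - t_b[np.clip(idx, 0, len(t_b) - 1)])
    return t_a[np.minimum(left, right) <= window]
```

Only the A-events whose nearest B-event lies inside the window survive, which is why a two-photon cascade stands out over a single-photon background.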

  5. A method of color correction of camera based on HSV model

    NASA Astrophysics Data System (ADS)

    Zhao, Rujin; Wang, Jin; Yu, Guobing; Zhong, Jie; Zhou, Wulin; Li, Yihao

    2014-09-01

    A novel color correction method for cameras based on the HSV (Hue, Saturation, Value) model is proposed in this paper, addressing the problem that a camera's spectral response differs from the CIE standard, leaving its image colors aberrant. First, the image is transformed from the RGB model to the HSV model and its color is corrected there; because the HSV model is consistent with human vision, the corrected color accords with human perception. Second, a color checker with 24 patches under a standard light source is used to compute the correction coefficient matrix, which reconciles the camera's spectral response with the CIE standard; the 24 patches also improve the applicability of the correction matrix to different images. Experimental results show that the color difference between the corrected colors and the color checker is lower with the proposed method, and the corrected image colors are consistent with human vision.
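The correction-matrix step can be sketched as a least-squares fit between measured and reference checker patches, followed by a pass through HSV space (a minimal sketch, not the paper's method; the patch values are invented, and only 3 of the 24 patches are shown for brevity):

```python
import colorsys
import numpy as np

# Hypothetical measured vs. reference patch values (linear RGB); a real
# calibration would use all 24 color-checker patches.
measured = np.array([[0.40, 0.30, 0.25],
                     [0.10, 0.55, 0.20],
                     [0.20, 0.15, 0.60]])
reference = np.array([[0.45, 0.31, 0.22],
                      [0.08, 0.60, 0.18],
                      [0.18, 0.14, 0.65]])

# Least-squares 3x3 correction matrix M such that measured @ M ~ reference.
M, *_ = np.linalg.lstsq(measured, reference, rcond=None)

def correct(rgb):
    """Apply the matrix correction, then pass through HSV, the space in
    which perceptual (hue/saturation) adjustments would be made."""
    r, g, b = np.clip(rgb @ M, 0.0, 1.0)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    # (hue/saturation adjustments would go here)
    return colorsys.hsv_to_rgb(h, s, v)
```

With 24 patches the system is overdetermined and the least-squares fit averages out per-patch noise.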

  6. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research on salient object regions in natural scenes has attracted much attention in computer vision and is widely used in applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system that exploits both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. The object detection, optimized with the compactness prior, is then obtained by ranking with background queries. Salient objects are generally grouped into connected areas with compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.

  7. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    NASA Astrophysics Data System (ADS)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors, and image-guided puncture can help interventional radiologists improve targeting accuracy. Following the recent release of the second-generation Kinect™, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture and to compare its needle-insertion guidance performance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape-image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable.
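The surface-matching step rests on the standard point-to-point ICP loop. A minimal sketch (generic ICP with the closed-form Kabsch alignment, not the authors' implementation, and omitting their 2D shape-image initialization):

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=10):
    """Basic point-to-point ICP: alternate nearest-neighbour matching with
    the closed-form rigid alignment until the surfaces agree."""
    cur = src.copy()
    for _ in range(n_iter):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        R, t = best_rigid(cur, dst[d.argmin(1)])
        cur = cur @ R.T + t
    return cur
```

Because nearest-neighbour matching only converges from a nearby start, a coarse initial alignment (the role of the correspondence search in the paper) is needed first.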

  8. Development of an angled Si-PM-based detector unit for positron emission mammography (PEM) system

    NASA Astrophysics Data System (ADS)

    Nakanishi, Kouhei; Yamamoto, Seiichi

    2016-11-01

    Positron emission mammography (PEM) systems have higher sensitivity than clinical whole-body PET systems because they have a smaller ring diameter. However, the spatial resolution of PEM systems is not high enough to detect early-stage breast cancer. To solve this problem, we developed a silicon photomultiplier (Si-PM) based detector unit for the development of a PEM system. Because a Si-PM channel is small, Si-PMs can resolve small scintillator pixels, improving the spatial resolution. Si-PM-based detectors also have inherently high timing resolution and can reduce random coincidence events by narrowing the time window. We used 1.5 × 1.9 × 15 mm LGSO scintillation pixels arranged in an 8 × 24 matrix to form scintillator blocks. Four scintillator blocks were optically coupled to Si-PM arrays with an angled light guide to form a detector unit. Since the light guide has angles of 5.625°, 64 scintillator blocks can be arranged in a nearly circular shape (a regular 64-sided polygon) using 16 detector units. We clearly resolved the pixels of the scintillator blocks in a 2-dimensional position histogram, where the averages of the peak-to-valley ratios (P/Vs) were 3.7 ± 0.3 and 5.7 ± 0.8 in the transverse and axial directions, respectively. The average energy resolution was 14.2 ± 2.1% full-width at half-maximum (FWHM). With the temperature-dependent gain control electronics, the photo-peak channel shifts were kept within ±1.5% over temperatures from 23 °C to 28 °C. With these results, in addition to the potential high timing performance of Si-PM-based detectors, our detector unit is promising for the development of a high-resolution PEM system.

  9. Camera-based noncontact metrology for static/dynamic testing of flexible multibody systems

    NASA Astrophysics Data System (ADS)

    Pai, P. Frank; Ramanathan, Suresh; Hu, Jiazhu; Chernova, DarYa K.; Qian, Xin; Wu, Genyong

    2010-08-01

    Presented here is a camera-based noncontact measurement theory for static/dynamic testing of flexible multibody systems that undergo large rigid, elastic and/or plastic deformations. The procedure and equations for accurate estimation of system parameters (i.e. the location and focal length of each camera and the transformation matrix relating its image and object coordinate systems) using an L-frame with four retroreflective markers are described in detail. Moreover, a method for refinement of estimated system parameters and establishment of a lens distortion model for correcting optical distortions using a T-wand with three markers is described. Dynamically deformed geometries of a multibody system are assumed to be obtained by tracing the three-dimensional instantaneous coordinates of markers adhered to the system's outside surfaces, and cameras and triangulation techniques are used for capturing marker images and identifying markers' coordinates. Furthermore, an EAGLE-500 motion analysis system is used to demonstrate measurements of static/dynamic deformations of six different flexible multibody systems. All numerical simulations and experimental results show that the use of camera-based motion analysis systems is feasible and accurate enough for static/dynamic experiments on flexible multibody systems, especially those that cannot be measured using conventional contact sensors.
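The marker coordinates in such systems come from triangulating the same marker seen by two or more calibrated cameras. A minimal linear (DLT) triangulation sketch with made-up projection matrices (not the EAGLE-500's calibration):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one marker observed by two cameras with
    3x4 projection matrices P1 and P2: stack the reprojection constraints
    and take the null vector as the homogeneous 3-D point."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated pair: identical intrinsics, 0.5 m baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
```

With noisy image coordinates the null vector becomes a least-squares solution, which is why system-parameter refinement and lens-distortion correction matter for accuracy.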

  10. People re-identification in camera networks based on probabilistic color histograms

    NASA Astrophysics Data System (ADS)

    D'Angelo, Angela; Dugelay, Jean-Luc

    2011-01-01

    People tracking faces many issues in video surveillance scenarios. One of the most challenging aspects is re-identifying people across different cameras: humans change appearance according to pose, clothes, and illumination conditions, so defining features that robustly describe people moving in a camera network is a nontrivial task. While color is widely exploited in the distinction and recognition of objects, most of the color descriptors proposed so far are not robust in complex applications such as video surveillance. A new color-based feature is introduced in this paper to describe the color appearance of subjects. For each target, a probabilistic color histogram (PCH) is built by using a fuzzy K-Nearest Neighbors (KNN) classifier trained on an ad hoc dataset and is used to match two corresponding appearances of the same person in different cameras of the network. The experimental results show that the defined descriptor is effective at discriminating and re-identifying people across two different video cameras regardless of the viewpoint change between the two views, and outperforms state-of-the-art appearance-based techniques.
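Histogram-based appearance matching of this kind reduces to comparing normalized color histograms across cameras. A minimal sketch using the Bhattacharyya coefficient (a common choice for histogram similarity; the paper's fuzzy-KNN PCH construction is not reproduced here):

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity of two color histograms (1 = identical, 0 = disjoint)."""
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return float(np.sqrt(h1 * h2).sum())

def reidentify(query, gallery):
    """Match a query appearance against a gallery of appearances from
    another camera; return the index of the most similar one."""
    return max(range(len(gallery)), key=lambda i: bhattacharyya(query, gallery[i]))
```

The probabilistic histogram in the paper plays the role of `query` and `gallery` entries here, with the classifier making the bin assignments robust to illumination change.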

  11. Demonstration of a Novel Positron Source Based on a Plasma Wiggler

    NASA Astrophysics Data System (ADS)

    Johnson, D. K.; Blumenfeld, I.; Barnes, C. D.; Clayton, C. E.; Decker, F. J.; Deng, S.; Emma, P.; Hogan, M. J.; Huang, C.; Ischebeck, R.; Iverson, R.; Joshi, C.; Katsouleas, T. C.; Kirby, N.; Krejcik, P.; Lu, W.; Marsh, K. A.; Mori, W. B.; Muggli, P.; O'Connell, C. L.; Oz, E.; Siemann, R. H.; Walz, D.; Zhou, M.

    2006-11-01

    A new method for generating positrons has been proposed that uses betatron X-rays emitted by an electron beam in a high-K plasma wiggler. The plasma wiggler is an ion column produced by the head of the beam when the peak beam density exceeds the plasma density. The radial electric field of the beam blows out the plasma electrons transversely, creating an ion column. The focusing electric field of the ion column causes the beam electrons to execute betatron oscillations about the ion column axis. If the beam energy and the plasma density are high enough, these oscillations lead to synchrotron radiation in the 1-50 MeV range. A significant amount of electron energy can be lost to these radiated X-ray photons. These photons strike a thin (0.5 X₀), high-Z target and create e+/e- pairs. The experiment was performed at the Stanford Linear Accelerator Center (SLAC) where a 28.5 GeV electron beam with σr ≈ 10 μm and σz ≈ 25 μm was propagated through a neutral Lithium vapor (Li). The radial electric field of the dense beam was large enough to field ionize the Li vapor to form a plasma. The positron yield was measured as a function of plasma density, ion column length and electron beam pulse length. A computational model was written to match the experimental data with theory. The measured positron spectra are in excellent agreement with those expected from the calculated X-ray spectral yield from the plasma wiggler. After matching the model with the experimental results, it was used to design a more efficient positron source, giving positron yields of 0.44 e+/e-, a number that is close to the target goal of 1-2 e+/e- for future positron sources.

  12. Performance Analysis of a Low-Cost Triangulation-Based 3D Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever-arm, and object space feature parameters was developed, and the effectiveness of this calibration approach was explored.
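
    The triangulation principle behind such cameras reduces to one relation: depth is inversely proportional to the measured disparity between the projected pattern and its reference image. A minimal sketch follows; the focal length and baseline used in the usage note are nominal, assumed Kinect-like values, not calibration results from the paper:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulation: Z = f * b / d, so depth is inversely proportional to disparity."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, disp_err_px=0.5):
    """Propagating a disparity error dd gives dZ = Z^2 * dd / (f * b):
    depth noise grows quadratically with range."""
    return z_m ** 2 * disp_err_px / (f_px * baseline_m)
```

With an assumed focal length of 580 px and a 7.5 cm baseline, `depth_from_disparity(580.0, 0.075, 43.5)` gives 1 m; the quadratic growth of `depth_error` with range is what limits triangulation-based cameras at long distances, consistent with the depth-dependent quality analysed in the paper.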

  13. Towards direct reconstruction from a gamma camera based on Compton scattering

    SciTech Connect

    Cree, M.J.; Bones, P.J. (Dept. of Electrical and Electronic Engineering)

    1994-06-01

    The Compton scattering camera (sometimes called the electronically collimated camera) has been shown by others to have the potential to better the photon-counting statistics and the energy resolution of the Anger camera for imaging in SPECT. By using coincident detection of Compton scattering events on two detecting planes, a photon can be localized to having been emitted somewhere on the surface of a cone. New algorithms are needed to achieve fully three-dimensional reconstruction of the source distribution from such a camera. If a complete set of cone-surface projections is collected over an infinitely extending plane, it is shown that the reconstruction problem is not only analytically solvable but also overspecified in the absence of measurement uncertainties. Two approaches to direct reconstruction are proposed, both based on the photons which travel perpendicularly between the detector planes. Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization). The modifications likely to be required in the presence of realistic measurement uncertainties are discussed.
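
    The cone localization described above follows directly from Compton kinematics: the energy deposited in the first detector fixes the scattering angle, and the line joining the two interaction points fixes the cone axis. A minimal sketch of the event-to-cone mapping (variable names and geometry are illustrative, not the authors' notation):

```python
import numpy as np

MEC2_KEV = 511.0  # electron rest energy (nominal)

def compton_cone(p1, p2, e_deposited, e_total):
    """Cone of possible source directions for one coincident event.

    p1, p2      : interaction positions on the scatter and absorber planes
    e_deposited : energy left in the first (scatter) detector, keV
    e_total     : initial photon energy, keV
    Returns (apex, axis unit vector toward the source, half-opening angle in rad).
    """
    e_scattered = e_total - e_deposited
    # Compton formula: cos(theta) = 1 - m_e c^2 (1/E' - 1/E0)
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_scattered - 1.0 / e_total)
    axis = np.asarray(p1, float) - np.asarray(p2, float)
    axis /= np.linalg.norm(axis)
    return np.asarray(p1, float), axis, float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
```

For a 511 keV photon depositing exactly half its energy in the scatterer, the half-opening angle comes out as 90 degrees, as the Compton formula requires.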

  14. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol is implemented in NAOC, which is composed of an interface board with an Altera FPGA, 32MB SDRAM and some other accessory devices integrated on it, and two packages of control software running in the Nios II embedded processor and the remote host PC respectively. The system is used to replace a 7200 series image acquisition card which is inserted in a control and data acquisition PC, and to download commands to an existing CCD camera and collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration for the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built in the Nios II IDE kits and the C++ program used in the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication and UDP data uploading.
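
    The data-upload half of such a design can be illustrated with a toy UDP image upload over loopback; the 4-byte sequence/length framing here is a hypothetical stand-in, not the system's actual protocol:

```python
import socket
import struct

def send_image_udp(sock, addr, image_bytes, chunk=1024):
    """Split an image into sequence-numbered datagrams (4-byte header: seq, length)."""
    for seq, off in enumerate(range(0, len(image_bytes), chunk)):
        payload = image_bytes[off:off + chunk]
        sock.sendto(struct.pack(">HH", seq, len(payload)) + payload, addr)

# loopback demonstration: 2500 bytes split into chunks of 1024, 1024, 452
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_image_udp(tx, rx.getsockname(), b"\x00" * 2500)
chunks = [rx.recvfrom(2048)[0] for _ in range(3)]
sizes = [struct.unpack(">HH", c[:4])[1] for c in chunks]
tx.close(); rx.close()
print(sizes)  # [1024, 1024, 452]
```

In a real system the sequence numbers let the receiver detect dropped datagrams and reassemble the frame, which is why UDP upload is paired with TCP for the command path.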

  15. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of such instruments allows the determination of cloud occurrence and, for a paired system, cloud base heights. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon PowerShot A2300) enclosed in weatherproof containers - was developed at Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to gather pictures simultaneously every 5 min. Data collection started at the end of October 2013, and continuous operation of the cameras has been implemented since the end of May 2014. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of a merit function that quantifies the area of overlap of the two pictures: when two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions at which the pictures have the best overlap. Pictures of overcast sky proved difficult to process for cloud base height and were excluded from processing. The figure below shows the initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during these times. However, the increase in atmospheric convection can be deduced from the higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar
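
    The merit-function matching can be sketched in a few lines: slide one image across the other, score each candidate column shift by the mean squared difference over the overlap, and convert the best shift to a height by small-angle triangulation. The synthetic data and the per-pixel angular resolution are illustrative assumptions; only the 489 m baseline comes from the abstract:

```python
import numpy as np

def best_column_shift(img_a, img_b, max_shift):
    """Merit function: mean squared difference over the overlapping columns.
    The shift minimising it gives the best overlap of the two pictures."""
    merits = [np.mean((img_a[:, s:] - img_b[:, :-s]) ** 2)
              for s in range(1, max_shift + 1)]
    return 1 + int(np.argmin(merits))

def cloud_base_height(shift_px, ifov_rad, baseline_m=489.0):
    """Small-angle triangulation: height = baseline / angular disparity."""
    return baseline_m / (shift_px * ifov_rad)
```

For example, a 10-pixel shift at an assumed 1 mrad-per-pixel resolution would place the cloud base near 48.9 km, so real sky pixels must subtend far smaller angles to yield the 250 m to 1.5 km heights reported above.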

  16. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, dosimetric characteristics in small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photon, electron and positron) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency, which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2% and 3% per gram of difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used in the assessment of radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.
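
    The S-value computed from such simulations is, in the MIRD formalism, the absorbed dose in a target organ per decay in a source organ: the emission energies weighted by their absorbed fractions, divided by the target mass. A minimal sketch with illustrative numbers (not values from the paper's database):

```python
def s_value(energies_j_per_decay, absorbed_fractions, target_mass_kg):
    """MIRD-style S-value (Gy per decay): each emission's mean energy per decay
    weighted by the fraction absorbed in the target, divided by the target mass."""
    return sum(e * phi for e, phi in
               zip(energies_j_per_decay, absorbed_fractions)) / target_mass_kg
```

The 1/mass factor is why S-values scale so directly with the per-gram body-weight differences quoted above: for a fixed absorbed fraction, a lighter organ of the same geometry receives a proportionally higher dose per decay.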

  17. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria the frequent (1 per hour) explosions and continuous lava flow effusion may be observed from a vantage point at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we have used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a seismic long-period response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time-lapse imagery at one minute intervals. These longer time scales reveal dynamic structure in the uplift and subsidence trends, effusion rate, and surface flow patterns that are related to internal conduit pressurization. In 2012 we performed particle image velocimetry with multiple spatially separated cameras in order to reconstruct 3-dimensional surface movements.
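
    Particle image velocimetry in the spatial frequency domain amounts to phase correlation: the normalized cross-power spectrum of two patches transforms back to a sharp peak at their relative displacement. A minimal sketch (patch sizes and the circular-shift check are illustrative, not the authors' processing chain):

```python
import numpy as np

def patch_displacement(patch_t0, patch_t1):
    """Phase correlation: the normalized cross-power spectrum of two patches
    inverse-transforms to a delta at their relative displacement."""
    F0 = np.fft.fft2(patch_t0)
    F1 = np.fft.fft2(patch_t1)
    cross = F1 * np.conj(F0)
    cross /= np.abs(cross) + 1e-12        # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    h, w = corr.shape
    dy, dx = int(dy), int(dx)
    # map wrap-around peak positions to signed shifts
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx
```

Sub-pixel refinement (e.g. fitting a paraboloid to the correlation peak) is what pushes such measurements to the better-than-10-cm displacement resolution quoted above.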

  18. Portable Positron Measurement System (PPMS)

    SciTech Connect

    2011-01-01

    The Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our Facebook site http://www.facebook.com/idahonationallaboratory.

  19. Portable Positron Measurement System (PPMS)

    ScienceCinema

    None

    2016-07-12

    The Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our Facebook site http://www.facebook.com/idahonationallaboratory.

  20. Scale-unambiguous relative pose estimation of space uncooperative targets based on the fusion of three-dimensional time-of-flight camera and monocular camera

    NASA Astrophysics Data System (ADS)

    Hao, Gangtao; Du, Xiaoping; Chen, Hang; Song, Jianjun; Gao, Tengfei

    2015-05-01

    An approach of scale-unambiguous relative pose estimation for space uncooperative targets based on the fusion of low resolution three-dimensional time-of-flight camera and monocular camera is proposed. No a priori knowledge about the targets is assumed. First, a modified range-intensity Markov random field model is presented to quickly reconstruct the range value for each feature point. Second, the scale-ambiguous relative pose estimation algorithm based on extended Kalman filter-unscented Kalman filter-particle filter combination filter is designed in vision simultaneous localization and mapping framework. Third, the overall scale factor estimation approach based on range-intensity fusion image, which takes the feature points' range reconstruction uncertainty as measurement noise, is proposed for the final scale-unambiguous pose estimation. Finally, some simulations demonstrate the validity and capability of the proposed approach.

  1. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons identified manually, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344
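
    Once the virtual horizon row is known, flat-road pinhole geometry gives the range from the image row of the vehicle's bottom edge. A minimal sketch with hypothetical focal length and camera height (the flat-road assumption is exactly what the paper's run-time horizon estimate compensates for):

```python
def range_from_horizon(f_px, cam_height_m, y_vehicle_bottom, y_horizon):
    """Flat-road pinhole geometry: a ray through the vehicle's bottom edge
    intersects the ground plane at range Z = f * H / (y_bottom - y_horizon)."""
    dy = y_vehicle_bottom - y_horizon      # pixels below the virtual horizon
    if dy <= 0:
        raise ValueError("vehicle bottom must lie below the horizon")
    return f_px * cam_height_m / dy
```

With an assumed 1000 px focal length and a camera 1.2 m above the road, a vehicle whose bottom edge sits 60 pixels below the horizon is about 20 m away; an error of a few pixels in the horizon row therefore shifts the range substantially, which is why the horizon must be re-estimated continuously.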

  2. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run-time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For experiments, a vision-based forward collision warning system has been implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with horizons identified manually, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments.

  3. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    NASA Astrophysics Data System (ADS)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and inexpensive hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.
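
    The image-to-command loop can be sketched as follows; in place of OpenCV calls (e.g. thresholding plus `cv2.moments`), plain numpy is used here so the example is self-contained, and the command strings and thresholds are hypothetical:

```python
import numpy as np

def steering_command(frame, threshold=128, deadband_px=10):
    """Map the bright target's centroid offset from the image centre to a
    turn command (hypothetical command protocol)."""
    ys, xs = np.nonzero(frame > threshold)
    if xs.size == 0:
        return "STOP"                      # nothing detected in the frame
    offset = xs.mean() - frame.shape[1] / 2.0
    if offset > deadband_px:
        return "TURN_RIGHT"
    if offset < -deadband_px:
        return "TURN_LEFT"
    return "FORWARD"

frame = np.zeros((120, 160), dtype=np.uint8)
frame[50:60, 130:140] = 255                # bright target right of centre
print(steering_command(frame))             # TURN_RIGHT
```

In the paper's setting, the commands computed on the PC would be sent to the robot controller over its serial or network interface each frame.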

  4. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both the spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can implement the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a specified portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually used. Beam angles varying in the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via the IEEE 1394 bus, and pointing errors can be computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
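
    The centroid equation referred to above is the intensity-weighted mean of the spot position over the area-of-interest window; the offset from the window centre is the pointing error. A minimal numpy sketch (the pixel pitch and window contents are illustrative):

```python
import numpy as np

def centroid_error(spot, pixel_pitch_um=1.0):
    """Intensity-weighted centroid relative to the array centre (the centroid
    equation); the two offsets are the x/y pointing errors in microns."""
    total = spot.sum()
    ys, xs = np.indices(spot.shape)
    cx = (xs * spot).sum() / total
    cy = (ys * spot).sum() / total
    return ((cx - (spot.shape[1] - 1) / 2) * pixel_pitch_um,
            (cy - (spot.shape[0] - 1) / 2) * pixel_pitch_um)
```

Because only the pixels inside the area of interest enter the sums, shrinking the window both raises the frame rate (as quoted above) and reduces the per-frame arithmetic on the DSP.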

  5. Star-field identification algorithm. [for implementation on CCD-based imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A description of a new star-field identification algorithm that is suitable for implementation on CCD-based imaging cameras is presented. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. The algorithm incorporates tolerance to faulty input data, errors in the reference catalog, and instrument-induced systematic errors.
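
    The oriented-triplet idea can be sketched as follows: the three pairwise angular separations, sorted, form a signature invariant to camera rotation, and catalogue lookup with a tolerance absorbs measurement and catalogue errors. The data structures and tolerance below are illustrative, not the algorithm's actual ones:

```python
import numpy as np

def triplet_signature(v1, v2, v3):
    """Sorted pairwise angular separations of three star direction vectors:
    invariant to camera orientation."""
    def ang(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(c, -1.0, 1.0)))
    return tuple(sorted((ang(v1, v2), ang(v1, v3), ang(v2, v3))))

def match_triplet(sig, catalog, tol=1e-3):
    """Catalogue lookup; the tolerance absorbs measurement and catalogue errors."""
    return [name for name, cat_sig in catalog
            if all(abs(s - c) < tol for s, c in zip(sig, cat_sig))]
```

The visual magnitudes mentioned above would be used as an additional discriminator when two catalogue triplets have nearly identical angular signatures.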

  6. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value in reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this registration is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.

  7. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, in particular with monitoring devices relying on the detection of secondary radiation. Herein a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays, to be used in a clinical scenario. In addition, an analytical tool based on the Monte Carlo data is developed to predict the expected precision of a given geometrical configuration. Such a method follows the clinical workflow requirement for a solution that is simultaneously accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  8. Line-based camera calibration with lens distortion correction from a single image

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Gao, He; Wang, Yexin

    2013-12-01

    Camera calibration is a fundamental and important step in many machine vision applications. For some practical situations, computing camera parameters from merely a single image is becoming increasingly feasible and significant. However, existing single-view-based calibration methods have various disadvantages, such as ignoring lens distortion or requiring prior knowledge or a special calibration environment. To address these issues, we propose a line-based camera calibration method with lens distortion correction from a single image using three squares of unknown length. Initially, the radial distortion coefficients are obtained through a non-linear optimization process which is isolated from the pin-hole model calibration, and the detected distorted lines of all the squares are corrected simultaneously. Subsequently, the corresponding lines used for homography estimation are normalized to avoid a specific unstable case, and the intrinsic parameters are calculated from the sufficient restrictions provided by the vectors of the homography matrix. To evaluate the performance of the proposed method, both simulative and real experiments were carried out; the results show that the proposed method is robust under general conditions and achieves measurement accuracy comparable to the traditional multiple-view-based calibration method using a 2D chessboard target.
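
    The core geometric ingredient, isolating radial distortion by requiring detected lines to be straight after correction, can be sketched independently of the full calibration. This is a simplified, hypothetical reduction of the paper's method (one coefficient, grid search instead of non-linear optimization, no homography step):

```python
import numpy as np

def undistort(points, k1, centre):
    """One-parameter radial model: p_u = c + (p_d - c) * (1 + k1 * r^2)."""
    d = np.asarray(points, float) - centre
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return centre + d * (1.0 + k1 * r2)

def line_straightness(points):
    """RMS perpendicular distance of points from their best-fit line
    (the smallest singular value of the centred point set)."""
    p = np.asarray(points, float)
    p = p - p.mean(axis=0)
    return np.linalg.svd(p, compute_uv=False)[-1] / np.sqrt(len(p))

def fit_k1(lines, centre, candidates):
    """Grid-search the distortion coefficient that best straightens all lines."""
    return min(candidates,
               key=lambda k: sum(line_straightness(undistort(l, k, centre))
                                 for l in lines))
```

Decoupling the distortion estimate from the pin-hole parameters in this way is what lets the intrinsics be solved afterwards from clean, straightened lines.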

  9. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect the robot's movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  10. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    PubMed

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm^2 area and 2 mm thickness. The detectors are pixelated and stacked, forming a 3D detector with voxel sizes of 2 × 1 × 2 mm^3. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with source activities of up to 10^4 Bq with constant efficiency and is completely saturated at 10^9 Bq. The efficiency of the system is evaluated using a simulated 18F point source phantom in the center of the field of view (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8.

  11. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect the robot's movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  12. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-06-18

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes from the scene irrelevant regions which do not affect the robot's movement. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient.

  13. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Després, Philippe; Funk, Tobias; Shah, Kanai S.; Hasegawa, Bruce H.

    2007-06-01

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous-crystal gamma camera. The simulated camera was composed of either a 4 × 4 array of four-channel 8 × 8 mm^2 PSAPDs or an 8 × 8 array of 4 × 4 mm^2 discrete APDs. These configurations, each requiring 64 readout channels, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 × 3.6 cm^2 photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (≈30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm FWHM were obtained for the APD- and PSAPD-based cameras respectively for 99mTc, with corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small field of view gamma cameras dedicated, for example, to small animal or organ imaging.

  14. Scintillators for positron emission tomography

    SciTech Connect

    Moses, W.W.; Derenzo, S.E.

    1995-09-01

    Like most applications that utilize scintillators for gamma detection, Positron Emission Tomography (PET) desires materials with high light output, short decay time, and excellent stopping power that are also inexpensive, mechanically rugged, and chemically inert. Realizing that this "ultimate" scintillator may not exist, this paper evaluates the relative importance of these qualities and describes their impact on the imaging performance of PET. The most important PET scintillator quality is the ability to absorb 511 keV photons in a small volume, which affects the spatial resolution of the camera. The dominant factor is a short attenuation length (≤ 1.5 cm is required), although a high photoelectric fraction is also important (> 30% is desired). The next most important quality is a short decay time, which affects both the dead time and the coincidence timing resolution. Detection rates for single 511 keV photons can be extremely high, so decay times ≤ 500 ns are essential to avoid dead time losses. In addition, positron annihilations are identified by time coincidence, so ≤ 5 ns fwhm coincidence pair timing resolution is required to identify events with narrow coincidence windows, reducing contamination due to accidental coincidences. Current trends in PET cameras are toward septaless, "fully-3D" cameras, which have significantly higher count rates than conventional 2-D cameras and so place higher demands on scintillator decay time. Light output affects energy resolution, and thus the ability of the camera to identify and reject events where the initial 511 keV photon has undergone Compton scatter in the patient. The scatter-to-true event fraction is much higher in fully-3D cameras than in 2-D cameras, so future PET cameras would benefit from scintillators with a 511 keV energy resolution < 10-12% fwhm.
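
    The stopping-power requirement quoted above (attenuation length ≤ 1.5 cm) translates directly into detection efficiency via exponential attenuation, and in PET the efficiency enters squared because both annihilation photons must be detected in coincidence. A small sketch:

```python
import math

def detection_efficiency(thickness_cm, attenuation_length_cm):
    """Fraction of normally incident 511 keV photons interacting in the crystal:
    1 - exp(-L / lambda)."""
    return 1.0 - math.exp(-thickness_cm / attenuation_length_cm)

def pair_efficiency(thickness_cm, attenuation_length_cm):
    """Coincidence detection needs BOTH annihilation photons, so the
    single-photon efficiency enters squared."""
    return detection_efficiency(thickness_cm, attenuation_length_cm) ** 2
```

A 1.5 cm thick crystal with a 1.5 cm attenuation length stops about 63% of single photons but only about 40% of photon pairs, which is why stopping power dominates the scintillator figure of merit.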

  15. Binarization method based on evolution equation for document images produced by cameras

    NASA Astrophysics Data System (ADS)

    Wang, Yan; He, Chuanjiang

    2012-04-01

    We present an evolution equation-based binarization method for document images produced by cameras. Unlike the existing thresholding techniques, the idea behind our method is that a family of gradually binarized images is obtained by the solution of an evolution partial differential equation, starting with an original image. In our formulation, the evolution is controlled by a global force and a local force, both of which have opposite sign inside and outside the objects of interest in the original image. A simple finite difference scheme with a significantly larger time step is used to solve the evolution equation numerically; the desired binarization is typically obtained after only one or two iterations. Experimental results on 122 camera document images show that our method yields good visual quality and OCR performance.
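The evolution idea can be caricatured in a few lines of NumPy. This is a toy sketch under stated assumptions (the global force is modelled simply as the deviation from the global mean and the local force as the deviation from a 3×3 local mean; the paper's actual forces differ), but it shows how a large time step drives an image to a near-binary state in one or two iterations:

```python
import numpy as np

def _local_mean(u):
    """3x3 box average via padded shifts (a crude local mean)."""
    p = np.pad(u, 1, mode='edge')
    return sum(p[i:i + u.shape[0], j:j + u.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def evolve_binarize(img, dt=10.0, iters=2):
    """Toy evolution-equation binarization: pixels evolve under a
    global force (deviation from the global mean) plus a local force
    (deviation from a local mean); both change sign across object
    boundaries, so a large time step pushes foreground and background
    apart and a final threshold yields a binary image."""
    u = img.astype(float)
    g = u.mean()
    for _ in range(iters):
        force = (u - g) + (u - _local_mean(u))
        u = u + dt * force
    return (u > g).astype(np.uint8)  # 1 where the evolved value exceeds the global mean
```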

  16. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much important information independently of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  17. Unsteady pressure-sensitive paint measurement based on the heterodyne method using low frame rate camera

    NASA Astrophysics Data System (ADS)

    Matsuda, Yu; Yorita, Daisuke; Egami, Yasuhiro; Kameya, Tomohiro; Kakihara, Noriaki; Yamaguchi, Hiroki; Asai, Keisuke; Niimi, Tomohide

    2013-10-01

    The pressure-sensitive paint technique based on the heterodyne method was proposed for the precise pressure measurement of unsteady flow fields. This measurement is realized by detecting the beat signal that results from interference between a modulating illumination light source and a pressure fluctuation. The beat signal is captured by a camera with a considerably lower frame rate than the frequency of the pressure fluctuation. By carefully adjusting the frequency of the light and the camera frame rate, the signal at the frequency of interest is detected, while the noise signals at other frequencies are eliminated. To demonstrate the proposed method, we measured the pressure fluctuations in a resonance tube at the fundamental, second, and third harmonics. The pressure fluctuation distributions were successfully obtained and were consistent with measurements from a pressure transducer. The proposed method is a useful technique for measuring unsteady phenomena.
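The aliasing idea behind the heterodyne method can be reproduced numerically. The sketch below is illustrative only (all frequencies, the modulation depths and the exposure model are assumptions, not the paper's values): illumination is modulated at f_led, the pressure fluctuates at f_p, the intensity is integrated over each camera exposure, and the beat frequency f_p − f_led is recovered from frames taken at only 10 Hz:

```python
import numpy as np

fs_cam = 10.0               # camera frame rate [Hz], far below f_p
f_led, f_p = 121.2, 123.7   # illumination and pressure frequencies [Hz] (illustrative)
n_frames, n_sub = 512, 200  # frames recorded, sub-steps integrated per exposure

# simulate each frame as an exposure-time integral of the modulated intensity
t_frame = np.arange(n_frames) / fs_cam
frames = np.empty(n_frames)
for k, t0 in enumerate(t_frame):
    ts = t0 + np.arange(n_sub) / (n_sub * fs_cam)  # times within the exposure
    intensity = (1 + np.cos(2 * np.pi * f_led * ts)) * \
                (1 + 0.5 * np.cos(2 * np.pi * f_p * ts))
    frames[k] = intensity.mean()  # exposure integration low-passes the fast terms

spec = np.abs(np.fft.rfft(frames - frames.mean()))
freqs = np.fft.rfftfreq(n_frames, d=1 / fs_cam)
print(freqs[spec.argmax()])  # → 2.5, the beat frequency f_p - f_led
```

The fast components (f_led, f_p and their sum) are strongly attenuated by the exposure integration, so the spectrum of the slow frame sequence is dominated by the beat at 2.5 Hz, well below the camera's 5 Hz Nyquist limit.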

  18. Unsteady pressure-sensitive paint measurement based on the heterodyne method using low frame rate camera.

    PubMed

    Matsuda, Yu; Yorita, Daisuke; Egami, Yasuhiro; Kameya, Tomohiro; Kakihara, Noriaki; Yamaguchi, Hiroki; Asai, Keisuke; Niimi, Tomohide

    2013-10-01

    The pressure-sensitive paint technique based on the heterodyne method was proposed for the precise pressure measurement of unsteady flow fields. This measurement is realized by detecting the beat signal that results from interference between a modulating illumination light source and a pressure fluctuation. The beat signal is captured by a camera with a considerably lower frame rate than the frequency of the pressure fluctuation. By carefully adjusting the frequency of the light and the camera frame rate, the signal at the frequency of interest is detected, while the noise signals at other frequencies are eliminated. To demonstrate the proposed method, we measured the pressure fluctuations in a resonance tube at the fundamental, second, and third harmonics. The pressure fluctuation distributions were successfully obtained and were consistent with measurements from a pressure transducer. The proposed method is a useful technique for measuring unsteady phenomena.

  19. A mobile phone-based retinal camera for portable wide field imaging.

    PubMed

    Maamari, Robi N; Keenan, Jeremy D; Fletcher, Daniel A; Margolis, Todd P

    2014-04-01

    Digital fundus imaging is used extensively in the diagnosis, monitoring and management of many retinal diseases. Access to fundus photography is often limited by patient morbidity, high equipment cost and shortage of trained personnel. Advancements in telemedicine methods and the development of portable fundus cameras have increased the accessibility of retinal imaging, but most of these approaches rely on separate computers for viewing and transmission of fundus images. We describe a novel portable handheld smartphone-based retinal camera capable of capturing high-quality, wide field fundus images. The use of the mobile phone platform creates a fully embedded system capable of acquisition, storage and analysis of fundus images that can be directly transmitted from the phone via the wireless telecommunication system for remote evaluation. PMID:24344230

  20. Stereoscopic ground-based determination of the cloud base height: theory of camera position calibration with account for lens distortion

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Postylyakov, Oleg V.

    2016-05-01

    For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the cloud base height. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of some fragment of cloud; this shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. An earlier paper described a mathematical model of the stereophotography measurement and posed and solved the problem of adjusting the optical axes of the cameras in the paraxial (first-order geometric optics) approximation, applied to the central part of the sky frames. This paper describes a model of the experiment which takes lens distortion into account in the Seidel approximation (depending on the third order of the distance from the optical axis). We developed a procedure for simultaneous camera position calibration and estimation of the lens distortion parameters in the Seidel approximation.
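The Seidel (third-order) radial distortion model referred to above maps an ideal radius r to a distorted radius r_d = r(1 + k1·r²). A small sketch of applying and inverting this model (the fixed-point inversion is a generic technique under the assumption that k1·r² is small, not necessarily the authors' procedure):

```python
def apply_seidel_distortion(x, y, k1):
    """Ideal -> distorted image coordinates under third-order (Seidel)
    radial distortion: r_d = r * (1 + k1 * r**2)."""
    scale = 1.0 + k1 * (x * x + y * y)
    return x * scale, y * scale

def correct_seidel_distortion(xd, yd, k1, iters=10):
    """Distorted -> ideal coordinates by fixed-point iteration,
    adequate when the distortion term k1 * r**2 is small."""
    x, y = xd, yd
    for _ in range(iters):
        scale = 1.0 + k1 * (x * x + y * y)  # re-evaluate at current estimate
        x, y = xd / scale, yd / scale
    return x, y
```

A round trip through both functions should recover the original coordinates to high precision for moderate k1.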

  1. Infrared line cameras based on linear arrays for industrial temperature measurement

    NASA Astrophysics Data System (ADS)

    Drogmoeller, Peter; Hofmann, Guenter; Budzier, Helmut; Reichardt, Thomas; Zimmerhackl, Manfred

    2002-03-01

    The PYROLINE/MikroLine cameras provide continuous, non-contact measurement of linear temperature distributions. Operation in conjunction with the IR_LINE software provides data recording, real-time graphical analysis, process integration and camera-control capabilities. One system is based on pyroelectric line sensors with either 128 or 256 elements, operating at frame rates of 128 and 544 Hz respectively. Temperatures between 0 and 1300 °C are measurable in four distinct spectral ranges: 8-14 µm for low temperatures, 3-5 µm for medium temperatures, 4.8-5.2 µm for glass-temperature applications and 1.4-1.8 µm for high temperatures. A newly developed IR line camera (HRP 250) based upon a thermoelectrically cooled, 160-element PbSe detector array operating in the 3-5 µm spectral range permits the thermal gradients of fast-moving targets to be measured in the range 50-180 °C at a maximum frequency of 18 kHz. This special system was used to measure temperature distributions on rotating tires at velocities of more than 300 km/h (190 mph). A modified version of this device was used for real-time measurement of disk-brake rotors under load. Another line camera, consisting of a 256-element InGaAs array, was developed for the spectral range of 1.4-1.8 µm to detect impurities of polypropylene and polyethylene in raw cotton at frequencies of 2.5-5 kHz.

  2. AOTF-based NO2 camera, results from the AROMAT-2 campaign

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Vanhamel, Jurgen; Van Opstal, Bert; Maes, Jeroen; Merlaud, Alexis; Stebel, Kerstin; Schuettemeyer, Dirk

    2016-04-01

    A hyperspectral imager based on an acousto-optical tunable filter (AOTF) has been developed in the framework of the ALTIUS mission (atmospheric limb tracker for the investigation of the upcoming stratosphere). ALTIUS is a three-channel (UV, VIS, NIR) space-borne limb sounder aiming at the retrieval of concentration profiles of important trace species (O3, NO2, aerosols and more) with a good vertical resolution. An optical breadboard was built from the VIS channel concept and is now serving as a ground-based remote sensing instrument. Its good spectral resolution (0.6 nm) coupled with its natural imaging capabilities (6° square field of view sampled by a 512×512 pixel sensor) makes it suitable for the measurement of 2D fields of NO2, similarly to what is nowadays achieved with SO2 cameras. Our NO2 camera was one of the instruments that took part in the second Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT-2) campaign in August 2015. It was pointed at the smokestacks of the coal- and oil-burning power plant of Turceni (Romania) in order to image the emitted NO2 field and derive slant columns and instantaneous emission fluxes. The ultimate goal of the AROMAT campaigns is to prepare the validation of TROPOMI onboard Sentinel-5P. We will briefly describe the instrumental concept of the NO2 camera, its heritage from the ALTIUS mission, and its advantages compared with previous attempts at reaching the same goal. Key results obtained with the camera during the AROMAT-2 campaign will be presented and further improvements will be discussed.
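A column-density cross section of the plume can be turned into an instantaneous emission flux by the usual transect integration. The sketch below is a simplification under stated assumptions (a transect perpendicular to the wind, columns already converted from slant to vertical-equivalent units, and uniform wind speed; none of these details are taken from the paper):

```python
def plume_flux(columns, pixel_width_m, wind_speed_m_s):
    """Instantaneous emission flux (molecules/s) from a cross-plume
    transect of NO2 column densities (molecules/m^2, one value per
    pixel): integrate the columns across the plume and multiply by
    the wind speed component perpendicular to the transect."""
    return sum(columns) * pixel_width_m * wind_speed_m_s
```

For example, ten pixels of 1e20 molecules/m², 5 m wide each, in a 3 m/s wind give a flux of 1.5e22 molecules/s.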

  3. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing, not only for the fundamental understanding of CNT photoresponse processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical monitoring and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized by digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, is extracted and

  4. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera.

    PubMed

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D; Sclabassi, Robert J; Mao, Zhi-Hong; Sun, Mingui

    2011-06-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances.
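For illustration only (the paper's exact motion features and detectors are not reproduced here), a frame's global motion distribution can be summarized by simple statistics and classified with a minimal K-nearest-neighbour voter:

```python
import numpy as np

def motion_features(vectors):
    """Summarize a frame's motion vectors (an N x 2 array of per-pixel
    displacements) by illustrative global statistics: mean magnitude,
    magnitude spread, and mean direction components."""
    mag = np.hypot(vectors[:, 0], vectors[:, 1])
    ang = np.arctan2(vectors[:, 1], vectors[:, 0])
    return np.array([mag.mean(), mag.std(),
                     np.cos(ang).mean(), np.sin(ang).mean()])

def knn_predict(train_x, train_y, x, k=3):
    """Minimal K-nearest-neighbour classifier: majority vote among
    the k training feature vectors closest to x."""
    d = np.linalg.norm(train_x - x, axis=1)
    nearest = np.asarray(train_y)[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[counts.argmax()]
```

With large-magnitude motion labelled "walk" and near-zero motion labelled "rest", a new frame's feature vector falls near one cluster and is voted into that class.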

  5. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  6. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement. PMID:26928458
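The repeatability figures quoted above are relative technical errors of measurement. The standard two-trial TEM formula, TEM = sqrt(Σdᵢ²/2n) expressed as a percentage of the grand mean, is easy to state in code (a generic sketch of the textbook formula, not the authors' analysis script):

```python
import math

def tem_percent(trial1, trial2):
    """Relative technical error of measurement (%TEM) for two repeated
    trials: TEM = sqrt(sum(d_i**2) / (2n)), divided by the grand mean
    of all measurements and expressed as a percentage."""
    n = len(trial1)
    ssd = sum((a - b) ** 2 for a, b in zip(trial1, trial2))
    tem = math.sqrt(ssd / (2 * n))
    grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
    return 100.0 * tem / grand_mean
```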

  7. Motion measurement of SAR antenna based on high frame rate camera

    NASA Astrophysics Data System (ADS)

    Li, Q.; Cao, R.; Feng, H.; Xu, Z.

    2015-03-01

    Synthetic Aperture Radar (SAR) is currently widely used in marine, agriculture, geology and other fields, and the SAR antenna is one of its most important subsystems. Antenna performance has a significant impact on SAR sensitivity, azimuth resolution, image blur and other parameters. To improve SAR resolution, SAR antennas are designed and fabricated in a flexible, expandable style. However, the movement of a flexible antenna has a considerable impact on the accuracy of the SAR system, so motion measurement of the flexible antenna is an urgent problem. This paper studies a motion measurement method based on a high frame rate camera, and designs and carries out a flexible-antenna motion measurement experiment. In the experiment, the main IMU and the sub IMU were placed at the two ends of a cantilever simulating the flexible antenna; the high frame rate camera was placed above the main IMU, and the imaging target was set on the side of the sub IMU. When the cantilever moved, the IMUs acquired the spatial coordinates of the cantilever movement in real time, while the high frame rate camera captured a series of target images, which were then input into the JTC to obtain the cantilever motion coordinates. Comparison and analysis of the measurement results verify the measurement accuracy of the flexible antenna motion.

  8. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement.

  9. Digital camera and smartphone as detectors in paper-based chemiluminometric genotyping of single nucleotide polymorphisms.

    PubMed

    Spyrou, Elena M; Kalogianni, Despina P; Tragoulias, Sotirios S; Ioannou, Penelope C; Christopoulos, Theodore K

    2016-10-01

    Chemi(bio)luminometric assays have contributed greatly to various areas of nucleic acid analysis due to their simplicity and detectability. In this work, we present the development of chemiluminometric genotyping methods in which (a) detection is performed by using either a conventional digital camera (at ambient temperature) or a smartphone and (b) a lateral flow assay configuration is employed for even higher simplicity and suitability for point of care or field testing. The genotyping of the C677T single nucleotide polymorphism (SNP) of the methylenetetrahydrofolate reductase (MTHFR) gene is chosen as a model. The interrogated DNA sequence is amplified by polymerase chain reaction (PCR) followed by a primer extension reaction. The reaction products are captured through hybridization on the sensing areas (spots) of the strip. Streptavidin-horseradish peroxidase conjugate is used as a reporter along with a chemiluminogenic substrate. Detection of the emerging chemiluminescence from the sensing areas of the strip is achieved by digital camera or smartphone. For this purpose, we constructed a 3D-printed smartphone attachment that houses inexpensive lenses and converts the smartphone into a portable chemiluminescence imager. The device enables spatial discrimination of the two alleles of a SNP in a single shot by imaging of the strip, thus avoiding the need of dual labeling. The method was applied successfully to genotyping of real clinical samples. Graphical abstract Paper-based genotyping assays using digital camera and smartphone as detectors.

  10. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of downstream photogrammetric products. PMID:25835187

  11. Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities

    PubMed Central

    Cusack, Jeremy J.; Dickman, Amy J.; Rowcliffe, J. Marcus; Carbone, Chris; Macdonald, David W.; Coulson, Tim

    2015-01-01

    Camera trap surveys exclusively targeting features of the landscape that increase the probability of photographing one or several focal species are commonly used to draw inferences on the richness, composition and structure of entire mammal communities. However, these studies ignore expected biases in species detection arising from sampling only a limited set of potential habitat features. In this study, we test the influence of camera trap placement strategy on community-level inferences by carrying out two spatially and temporally concurrent surveys of medium to large terrestrial mammal species within Tanzania’s Ruaha National Park, employing either strictly game trail-based or strictly random camera placements. We compared the richness, composition and structure of the two observed communities, and evaluated what makes a species significantly more likely to be caught at trail placements. Observed communities differed marginally in their richness and composition, although differences were more noticeable during the wet season and for low levels of sampling effort. Lognormal models provided the best fit to rank abundance distributions describing the structure of all observed communities, regardless of survey type or season. Despite this, carnivore species were more likely to be detected at trail placements relative to random ones during the dry season, as were larger bodied species during the wet season. Our findings suggest that, given adequate sampling effort (> 1400 camera trap nights), placement strategy is unlikely to affect inferences made at the community level. However, surveys should consider more carefully their choice of placement strategy when targeting specific taxonomic or trophic groups. PMID:25950183

  12. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, V.; Djambazov, L.; Dorner, D.; Farnier, C.; Gendotti, A.; Grimm, O.; von Gunten, H. P.; Hildebrand, D.; Horisberger, U.; Huber, B.; Kim, K.-S.; Köhne, J.-H.; Krähenbühl, T.; Krumm, B.; Lee, M.; Lenain, J.-P.; Lorenz, E.; Lustermann, W.; Lyard, E.; Mannheim, K.; Meharga, M.; Neise, D.; Nessi-Tedaldi, F.; Overkemping, A.-K.; Pauss, F.; Renker, D.; Rhode, W.; Ribordy, M.; Rohlfs, R.; Röser, U.; Stucki, J.-P.; Thaele, J.; Tibolla, O.; Viertel, G.; Vogler, P.; Walter, R.; Warda, K.; Weitzel, Q.

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  13. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.

    PubMed

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation of the estimated results to ground truth produced a variation of 3% from the mean. PMID:26393600
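Once the glass is modeled as a truncated cone, converting a change in water level into a volume drunk is straightforward geometry. A hedged sketch (in the paper the cone parameters are estimated from the image; here they are simply given as inputs):

```python
import math

def cone_volume_to_level(r0, slope, h):
    """Liquid volume in a conical (frustum-shaped) glass filled to
    height h, with inner radius r(z) = r0 + slope * z: the integral
    pi * (r0^2 h + r0 * slope * h^2 + slope^2 * h^3 / 3)."""
    return math.pi * (r0**2 * h + r0 * slope * h**2 + slope**2 * h**3 / 3)

def volume_drunk(r0, slope, h_before, h_after):
    """Volume removed when the water level drops from h_before to h_after."""
    return (cone_volume_to_level(r0, slope, h_before)
            - cone_volume_to_level(r0, slope, h_after))
```

With slope = 0 the model reduces to a cylinder, a convenient sanity check.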

  14. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    PubMed Central

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation of the estimated results to ground truth produced a variation of 3% from the mean. PMID:26393600

  15. Optimum design of the carbon fiber thin-walled baffle for the space-based camera

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Song, Gu; Yuan, An; Jin, Guang

    2011-08-01

    Designing the thin-walled baffle of a space-based camera is an important task in lightweight space camera development, owing to its stringent mass and quality requirements and the harsh mechanical environment, especially for a baffle made of carbon fiber. This paper describes the design process of such a carbon fiber thin-walled baffle, which is also instructive for other thin-walled baffle designs for space cameras. Using finite element analysis of how the wall parameters influence structural stiffness and strength, the designer obtained the design margins that the baffle can tolerate within its development requirements, and on this basis determined a better optimization criterion for the geometric parameter optimization. This guiding principle is significant for the optimum design of thin-walled baffles of space cameras: for a carbon fiber structure whose stiffness and strength can be tailored, the effect of the optimization is more remarkable when the design parameters are chosen appropriately. Combining the manufacturing process with the design requirements, the structural scheme of the thin-walled baffle was selected and the specific carbon fiber fabrication technology was optimized through FEM-based optimization, effectively reducing processing cost and cycle time. Meanwhile, the weight of the thin-walled baffle was reduced significantly while still meeting the structural design requirements. Engineering evaluation shows that the thin-walled baffle satisfies the practical needs of the space-based camera very well: its mass was reduced by about 20%, and the final assessment indices were significantly better than the overall design requirements.

  16. A pixellated γ-camera based on CdTe detectors: clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience in producing high-grade materials, with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, and holding a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were made by Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of close γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to assess early the extent of an

  17. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" shows a virtual environment of the lunar surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, the stereo 360-degree panorama, stitched from 112 images, is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time, so we render the Sun, planets and stars according to time and rover location, based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will build more immersive virtual Moon scenes and animations to help the public learn more about the Moon in the future.
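The panorama-on-a-sphere projection described above reduces, for an equirectangular panorama, to mapping each pixel to a unit viewing ray from the rover. The sketch below assumes a common equirectangular convention (u spans longitude, v spans latitude); the mission's actual orientation coordinates and camera parameters are not public in this abstract, so the convention here is illustrative only.

```python
import math

def panorama_ray(u, v, width, height):
    """Map an equirectangular panorama pixel (u, v) to a unit viewing
    direction on the sphere centred on the camera.
    Assumed convention: u spans longitude -pi..pi, v spans latitude
    +pi/2 (top row) .. -pi/2 (bottom row)."""
    lon = (u / width) * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (v / height) * math.pi
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))

# The image centre looks along the horizon at longitude 0:
x, y, z = panorama_ray(2048, 1024, 4096, 2048)
```

A rendering engine would texture the inside of a sphere and look up each fragment's colour through the inverse of this mapping.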

  18. Treatment modification of yttrium-90 radioembolization based on quantitative positron emission tomography/CT imaging.

    PubMed

    Chang, Ted T; Bourgeois, Austin C; Balius, Anastasia M; Pasciak, Alexander S

    2013-03-01

    Treatment activity for yttrium-90 ((90)Y) radioembolization when calculated by using the manufacturer-recommended technique is only partially patient-specific and may result in a subtumoricidal dose in some patients. The authors describe the use of quantitative (90)Y positron emission tomography/computed tomography as a tool to provide patient-specific optimization of treatment activity and evaluate this new method in a patient who previously received traditional (90)Y radioembolization. The modified treatment resulted in a 40-Gy increase in absorbed dose to tumor and complete resolution of disease in the treated area within 3 months.
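For context, absorbed dose in (90)Y radioembolization is commonly estimated with the single-compartment MIRD relation, which assumes all beta energy is deposited locally in the perfused tissue. This is a generic sketch of that textbook relation, not the authors' quantitative PET/CT dosimetry method; the function name and example numbers are illustrative.

```python
def y90_absorbed_dose_gy(activity_gbq, tissue_mass_kg):
    """Mean absorbed dose for (90)Y microspheres under the standard MIRD
    assumption of complete local beta absorption:
        D [Gy] ~= 49.67 * A [GBq] / m [kg]
    """
    return 49.67 * activity_gbq / tissue_mass_kg

# e.g. 2 GBq delivered to a 1 kg perfused liver volume:
dose = y90_absorbed_dose_gy(2.0, 1.0)
```

Patient-specific methods such as the one described above refine this estimate by measuring the actual activity distribution rather than assuming uniform uptake.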

  19. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging.

    PubMed

    Mejia, J; Galvis-Alonso, O Y; Castro, A A de; Braga, J; Leite, J P; Simões, M V

    2010-12-01

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and the heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology.
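The maximum likelihood reconstruction mentioned above is usually realized as the MLEM update. The sketch below uses an explicit dense system matrix `A` for clarity; a real pinhole-SPECT code would instead use matched forward/back-projector routines, and all names here are illustrative rather than taken from the paper.

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """Minimal MLEM iteration for emission tomography:
        x <- x * (A^T (counts / (A x))) / (A^T 1)
    A: (n_detector_bins, n_voxels) system matrix.
    counts: measured projection data (one value per detector bin)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        ratio = np.divide(counts, proj,
                          out=np.zeros_like(proj), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

Because each update multiplies by a non-negative ratio, the estimate stays non-negative, which is one reason MLEM is preferred over filtered back-projection for low-count pinhole data.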

  20. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284
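The Mahalanobis distance error minimized above weights each 2D residual by the feature's uncertainty from the probabilistic map, so poorly known features pull the pose less. A minimal sketch of that error term (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def mahalanobis_error(obs_px, proj_px, cov):
    """Squared Mahalanobis distance between an observed 2D feature and the
    projection of its 3D map point, weighted by the feature's 2x2
    covariance from the probabilistic feature map."""
    d = obs_px - proj_px
    return float(d @ np.linalg.inv(cov) @ d)

# A 2 px residual along the uncertain axis (sigma = 2 px) costs the same
# as a 0.25 px residual would along a well-known axis:
cov = np.diag([4.0, 0.25])          # sigma_x = 2 px, sigma_y = 0.5 px
e = mahalanobis_error(np.array([2.0, 0.0]), np.zeros(2), cov)
```

A pose optimizer would sum this term over all 3D-to-2D correspondences and minimize over the camera pose parameters.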

  1. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.

  2. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284

  3. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype which has recently been developed and deployed in a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
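The two measurement principles above can be sketched in simplified form: parallax between two level, upward-looking cameras gives cloud height, and the Lambert-Beer (Koschmieder) contrast law gives visibility. Both functions below are idealized textbook relations under stated assumptions, not the prototype's more complex tilted-camera geometry; all names and numbers are illustrative.

```python
import math

def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Cloud base height from parallax between two level, upward-looking
    pinhole cameras separated by a known baseline:
        h = B * f / d   (focal length f in pixels, disparity d in pixels)."""
    return baseline_m * focal_px / disparity_px

def visibility_m(contrast, intrinsic_contrast, distance_m):
    """Koschmieder/Lambert-Beer visibility estimate: the measured contrast
    of a dark object at a known distance gives the extinction coefficient,
    and visibility is the range where contrast falls to 2% (ln 0.02 ~ -3.912)."""
    sigma = -math.log(contrast / intrinsic_contrast) / distance_m
    return 3.912 / sigma

h = cloud_base_height(20.0, 2000.0, 40.0)   # 20 m baseline, 40 px disparity
v = visibility_m(0.5, 1.0, 1000.0)          # contrast halved over 1 km
```

The prototype's tilted-camera geometry changes the height formula but not the underlying parallax principle.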

  4. A positron study on the microstructural evolution of Al-Li based alloys in the early stages of plastic deformation

    SciTech Connect

    Diego, N. de; Rio, J. del; Romero, R.; Somoza, A. |

    1997-11-01

    The formation of voids by coalescence of microvoids initiated at precipitates has been proposed to explain the fracture mechanisms in alloys containing a large number of second-phase particles, whereas in binary Al-Li alloys with shearable particles the brittleness could be linked to grain boundary fracture. Most microstructure studies of Al-Li alloys have been performed by deforming to fracture; however, little is known about the processes and mechanisms involved in the early stages of plastic deformation. Butler et al. have studied a quaternary Al-Li alloy and have found that there is a critical effective strain to cause voiding, which is about 0.06 and 0.1% for the aged and for the solution-treated material, respectively. It is well established that positrons are very sensitive to vacancy-like defects. With the aim of clarifying the behavior of Al-Li based alloys in the very early stages of deformation, and of detecting the eventual formation of microvoids, the authors have studied the response of the positron lifetime parameters to the degree of deformation in age-hardenable Al-Li based alloys plastically deformed under tensile stress.

  5. Positron emission mammography imaging

    SciTech Connect

    Moses, William W.

    2003-10-02

    This paper examines current trends in Positron Emission Mammography (PEM) instrumentation and the performance tradeoffs inherent in them. The most common geometry is a pair of parallel planes of detector modules. They subtend a larger solid angle around the breast than conventional PET cameras, and so have both higher efficiency and lower cost. Extensions to this geometry include encircling the breast, measuring the depth of interaction (DOI), and dual-modality imaging (PEM and x-ray mammography, as well as PEM and x-ray guided biopsy). The ultimate utility of PEM may not be decided by instrument performance, but by biological and medical factors, such as the patient to patient variation in radiotracer uptake or the as yet undetermined role of PEM in breast cancer diagnosis and treatment.

  6. Multiple views merging from different cameras in fringe-projection based phase-shifting method

    NASA Astrophysics Data System (ADS)

    Hu, Qingying; Harding, Kevin; Hamilton, Don; Flint, Jay

    2007-09-01

    This paper discusses issues related to accurate measurement using multiple cameras with phase-shifting techniques. Phase-shifting methods have been widely used in industrial inspection due to their high accuracy and excellent tolerance to surface finish, but so far most such systems use only one camera. In our applications, which inspect manufactured parts with complex shapes, one camera cannot capture the whole surface because of occlusions, double-bounced light, and the limited dynamic range of cameras. Multiple cameras have to be used and the data from the different cameras must be merged together. Because each camera has individual error sources when a part is measured, it is a challenge to obtain the same shape, in the same 3D coordinate system, from all cameras without data manipulation such as iterative registration. This paper addresses this challenge of data registration. The error sources are analyzed and demonstrated, and several paths for error reduction are presented. Experimental results show the significant improvement obtained.
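The phase-shifting core each camera relies on is the standard N-step phase retrieval; for four fringe images shifted by 0, π/2, π and 3π/2 the wrapped phase is recovered with an arctangent. This is the textbook four-step formula, shown here as a generic sketch rather than the paper's specific multi-camera pipeline.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images I_k = A + B*cos(phi + k*pi/2),
    k = 0..3:  I4 - I2 = 2B sin(phi),  I1 - I3 = 2B cos(phi), so
        phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: recover a known phase of 0.7 rad
phi0, a, b = 0.7, 0.5, 0.4
shots = [a + b * np.cos(phi0 + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*shots)
```

Because the formula depends only on intensity differences and ratios, it cancels the background A and modulation B, which is why phase shifting tolerates varying surface finish so well.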

  7. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is much needed. Near infrared (NIR) optical tomographic techniques have great potential as non-invasive imaging tools (due to their low cost and portability) to image embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebro-vascular hemodynamics and brain injury. As compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). In this study, we will demonstrate the volumetric tomographic reconstruction results taken from tissue phantoms; the latter method has great potential to determine and monitor the effect of anti-stroke therapies.

  8. Intense source of slow positrons

    NASA Astrophysics Data System (ADS)

    Perez, P.; Rosowsky, A.

    2004-10-01

    We describe a novel design for an intense source of slow positrons based on pair production with a beam of electrons from a 10 MeV accelerator hitting a thin target at a low incidence angle. The positrons are collected with a set of coils adapted to the large production angle. The collection system is designed to inject the positrons into a Greaves-Surko trap (Phys. Rev. A 46 (1992) 5696). Such a source could be the basis for a series of experiments in fundamental and applied research and would also be a prototype source for industrial applications in the field of defect characterization at the nanometer scale.

  9. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid and accurate camera positioning implemented on an FPGA has become possible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven through FPGA hardware, is used to extract the pixels of three target objects; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature-point coordinates, the image is filtered (median filtering is used here) to suppress artifacts introduced by the physical properties of the platform. (3) Marker-point coordinates are extracted by the FPGA hardware circuit, with a new iterative threshold-selection method used for image segmentation. The binarized image is then segmented into marker regions, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT), together with additional geometric constraints, is applied to three-dimensional reconstruction of space coordinates from the planar-array CMOS system. An SOPC (system on a programmable chip) is used here, taking advantage of its dual-core computing so that matching and coordinate operations run separately, thus increasing processing speed.
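Steps (3) above combine two standard building blocks: iterative threshold selection (Ridler-Calvard style) and an intensity-weighted center-of-gravity. A software sketch of both, as a reference model for what the FPGA circuit computes (names and the toy image are illustrative, not from the thesis):

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Ridler-Calvard style iterative threshold selection: repeatedly set
    the threshold to the midpoint of the foreground and background means."""
    t = img.mean()
    while True:
        fg, bg = img[img > t], img[img <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def centroid(img, t):
    """Intensity-weighted center of gravity of pixels above threshold t,
    returned as (x, y) in pixel coordinates."""
    ys, xs = np.nonzero(img > t)
    w = img[ys, xs].astype(float)
    return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

# Example: a bright 2x2 marker blob on a dark background
img = np.zeros((10, 10))
img[4:6, 6:8] = 100.0
t = iterative_threshold(img)
cx, cy = centroid(img, t)
```

On hardware, the division in the centroid is typically the only step needing a multi-cycle unit; the sums pipeline naturally over the pixel stream.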

  10. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers may be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768

  11. The feasibility of photo-based 3D modeling for the structures by using a common digital camera

    NASA Astrophysics Data System (ADS)

    Li, Ping; Zhang, Jin-quan; Li, Wan-heng; Lv, Jian-ming; Wang, Xin-zheng

    2011-12-01

    This article explores the feasibility of photo-based 3D modeling of arch bridge structures using an ordinary digital camera. First, the full workflow with an ordinary digital camera was studied, including camera calibration, data acquisition, data management, 3D orientation, scale setting and texturing; a 3D model can then be built from the photos. The model can be measured, edited and kept close to the real structure. Taking an interior masonry arch bridge as an example, a 3D model was built through the above processes using an HP CB350 camera. The 3D model can be integrated with the loading conditions and material properties to provide detailed data for analyzing the structure. This paper accumulates experience in data acquisition and modeling methods; the methods can be applied to other structural analyses and other 3D modeling conditions, with advantages in speed and economy.

  12. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    PubMed Central

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  13. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments.

  14. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and near-infrared laser lamps; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  15. Positron trapping at grain boundaries

    SciTech Connect

    Dupasquier, A. ); Romero, R.; Somoza, A. )

    1993-10-01

    The standard positron trapping model has often been applied, as a simple approximation, to the interpretation of positron lifetime spectra in situations of diffusion-controlled trapping. This paper shows that this approximation is not sufficiently accurate, and presents a model based on the correct solution of the diffusion equation, in the version appropriate for studying positron trapping at grain boundaries. The model is used for the analysis of new experimental data on positron lifetime spectra in a fine-grained Al-Ca-Zn alloy. Previous results on similar systems are also discussed and reinterpreted. The analysis yields effective diffusion coefficients not far from the values known for the base metals of the alloys.

  16. Early sinkhole detection using a drone-based thermal camera and image processing

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Shin, Sang Young; Ko, Byoung Chul; Chang, Chunho

    2016-09-01

    Accurate advance detection of the sinkholes that are now occurring more frequently is an important way of preventing human fatalities and property damage. Unlike naturally occurring sinkholes, human-induced ones in urban areas are typically due to groundwater disturbances and leaks of water and sewage caused by large-scale construction. Although many sinkhole detection methods have been developed, it is still difficult to predict sinkholes that originate at depth. In addition, conventional methods are inappropriate for scanning a large area because of their high cost. Therefore, this paper uses a drone combined with a thermal far-infrared (FIR) camera to detect potential sinkholes over a large area based on computer vision and pattern classification techniques. To make a standard dataset, we dug eight holes of depths 0.5-2 m in increments of 0.5 m and with a maximum width of 1 m, and filmed these using the drone-based FIR camera at a height of 50 m. We first detect candidate regions by analysing cold spots in the thermal images, based on the fact that a sinkhole typically has a lower thermal energy than its background. Then, these regions are classified into sinkhole and non-sinkhole classes using a pattern classifier. In this study, we ensemble the classification results from a light convolutional neural network (CNN) and from a Boosted Random Forest (BRF) with handcrafted features. We apply the proposed ensemble method successfully to sinkhole data of various sizes and depths in different environments, and show that the CNN ensemble and the BRF one with handcrafted features are better at detecting sinkholes than other classifiers or a standalone CNN.
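The cold-spot candidate stage can be sketched as a simple statistical threshold: flag pixels whose apparent temperature lies well below the scene mean. The paper's actual detector is more elaborate; this is a minimal illustration under the stated assumption of a roughly uniform background, with all names and numbers invented for the example.

```python
import numpy as np

def cold_spot_mask(thermal, k=2.0):
    """Candidate sinkhole mask: pixels whose apparent temperature lies more
    than k standard deviations below the scene mean (cold spots)."""
    mu, sigma = thermal.mean(), thermal.std()
    return thermal < mu - k * sigma

# Toy frame: uniform 20-degree ground with a 5x5 patch ~10 degrees colder
frame = np.full((100, 100), 20.0)
frame[40:45, 40:45] = 10.0
mask = cold_spot_mask(frame)
```

Connected regions of the mask would then be cropped and passed to the CNN/BRF ensemble for classification.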

  17. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    SciTech Connect

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-15

    Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation- and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature-based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature-based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface
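The ICP fine-tuning stage alternates nearest-neighbour matching with a closed-form rigid update (the SVD/Kabsch solution). The brute-force sketch below shows that loop in its simplest form; it is not the authors' accelerated implementation, and all names are illustrative.

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rotation R and translation t mapping src -> dst for
    corresponding Nx3 point sets, via the SVD (Kabsch) solution."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, n_iter=10):
    """Minimal point-to-point ICP: match each source point to its nearest
    destination point (brute force), then apply the Kabsch update."""
    cur = src.copy()
    for _ in range(n_iter):
        nn = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1).argmin(1)
        R, t = kabsch(cur, dst[nn])
        cur = cur @ R.T + t
    return cur
```

A good preregistration, as evaluated in the paper, matters precisely because this loop only converges to the correct alignment when the initial nearest-neighbour matches are mostly right.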

  18. Appearance based key-shot selection for a hand held camera

    NASA Astrophysics Data System (ADS)

    Alefs, Bram; Dijk, Judith

    2009-05-01

    Automatic selection of key-shots is an important step in video data processing. Depending on the purpose, key-shot selection provides user feedback on recorded data, storage reduction, and viewpoint selection, and it can be used for panoramic image stitching and 3D-reconstruction. In particular, investigating crime scenes or accidents involves large amounts of data, containing information on the physical arrangement of objects and details of surface geometry and appearance. This paper proposes an efficient method for automatic key-shot selection, providing onsite feedback on recorded segments and automatic selection of viewpoints for 3D-reconstruction. It uses appearance based object and scene modeling for a freely moving, hand held camera. The camera motion is determined on two levels, comparing appearances of local image regions and full 3D reconstruction. On the lower level, the 2D-warp between subsequent video frames is used to determine the local change of image appearance and derive a set of motion key frames. These key frames are then used to determine full 3D motion and to reconstruct objects. Furthermore, key frames are used for fast indexation and detection of loop closures. Examples of automatic key-frame selection are given for a re-enacted crime scene and compared to manual selection.
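The lower-level selection idea can be sketched with a stand-in change measure: the paper uses a 2D warp between frames, while this hypothetical sketch uses mean absolute intensity difference and declares a new key frame whenever the change since the last key frame exceeds a threshold:

```python
# Hypothetical sketch of motion-based key-frame selection. The threshold
# and the frame-difference measure are illustrative assumptions.

def select_keyframes(frames, threshold=10.0):
    """frames: list of flattened intensity lists; returns key-frame indices."""
    keys = [0]
    ref = frames[0]
    for i, f in enumerate(frames[1:], start=1):
        # Mean absolute difference to the last key frame.
        diff = sum(abs(a - b) for a, b in zip(f, ref)) / len(f)
        if diff > threshold:
            keys.append(i)
            ref = f
    return keys

# Four tiny "frames": frame 2 differs strongly from the first key frame.
frames = [[10, 10, 10, 10],
          [11, 10, 9, 10],
          [50, 52, 48, 51],
          [51, 52, 49, 50]]
print(select_keyframes(frames))  # → [0, 2]
```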

  19. Development of practical investigation system for cultural properties based on a projector-camera system

    NASA Astrophysics Data System (ADS)

    Miyata, Kimiyoshi

    2009-01-01

    Huge numbers of historical materials and cultural properties in museums are investigated using scientific and chemical analysis techniques; however, these techniques require specific equipment and difficult operations. In this research, a practical investigation system is developed to provide a convenient way to investigate color information in a variety of materials as the first step of the investigation process. The system consists of a data projector and a digital camera, and it is examined for detecting metameric color areas as an example investigation purpose. In this research, the data projector acts as an illuminator to produce illuminant metamerism, and the camera takes sequential images under different illumination colors created by the projector. In the experiment, seven colors of illumination are created by the projector, and images are taken under each colored illumination. The Euclidean distance in RGB space between a predetermined reference pixel and test pixels is calculated in each of the taken images. These distances are compared with thresholds determined using a metamerism test chart. The proposed system was examined on the test charts and authentic Japanese woodblock prints, and the experimental results showed that the system could offer a convenient first investigation of the materials.
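The detection rule can be sketched directly from the description: compute the RGB Euclidean distance between a reference pixel and a test pixel under each illumination color, and flag the pair as metameric if the distance changes appreciably across illuminants. The threshold and pixel values are illustrative assumptions:

```python
# Sketch of illuminant-metamerism detection via RGB distances.
import math

def rgb_dist(a, b):
    return math.dist(a, b)  # Euclidean distance in RGB space

def is_metameric(ref_series, test_series, threshold=20.0):
    """ref_series/test_series: RGB of the reference and test pixel under
    each of the projector's illumination colors (same order)."""
    dists = [rgb_dist(r, t) for r, t in zip(ref_series, test_series)]
    # Colors matching under one light but diverging under another are metameric.
    return max(dists) - min(dists) > threshold

ref  = [(120, 100, 90), (110, 95, 85)]   # reference pixel, two illuminants
same = [(121, 101, 89), (111, 96, 84)]   # matches under both lights
meta = [(121, 101, 89), (160, 60, 120)]  # diverges under the second light
```

In the real system seven illumination colors are used and the threshold comes from a metamerism test chart rather than a fixed constant.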

  20. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

    Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
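The error metric described above can be sketched as the distance between known grid coordinates and their digitized counterparts, expressed as a percentage of the field of view. The grid values below are illustrative, not the WETF data:

```python
# Sketch of the grid-based lens distortion error measure.
import math

def distortion_error_pct(known, measured, field_size):
    """Worst-case point error as a percentage of the field size."""
    errs = [math.dist(k, m) for k, m in zip(known, measured)]
    return 100.0 * max(errs) / field_size

known    = [(0, 0), (10, 0), (0, 10), (10, 10)]
measured = [(0.1, 0.0), (10.2, 0.1), (0.0, 10.1), (10.8, 10.6)]
err = distortion_error_pct(known, measured, field_size=10.0)
```

The worst error here comes from the corner point, matching the paper's observation that the outermost regions of a wide-angle lens contribute the largest distortion.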

  1. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning if it is due to the respiratory act or due to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performances of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performances under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
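The core estimate can be sketched as peak counting on the chest-to-camera distance signal: each inhalation produces a local extremum, and the count scales to breaths per minute. The sampling rate and synthetic signal are assumptions for illustration:

```python
# Sketch of respiratory-rate estimation from a depth (distance) signal.
import math

def respiratory_rate(depth_mm, fs):
    """depth_mm: chest-distance samples; fs: sampling frequency in Hz.
    Returns breaths per minute from local-maximum counting."""
    peaks = [i for i in range(1, len(depth_mm) - 1)
             if depth_mm[i - 1] < depth_mm[i] > depth_mm[i + 1]]
    duration_min = len(depth_mm) / fs / 60.0
    return len(peaks) / duration_min

# Synthetic chest motion: 0.25 Hz breathing (15 breaths/min) sampled at 10 Hz.
fs = 10.0
signal = [800 + 5 * math.sin(2 * math.pi * 0.25 * n / fs) for n in range(600)]
rate = respiratory_rate(signal, fs)   # ≈ 15 breaths per minute
```

A real depth signal would first need the chest-region segmentation and motion-discrimination steps the abstract describes before peak counting is reliable.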

  2. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

    A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire the depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model at the given anamorphic optic system's angular step. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.
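The merging step can be sketched as mapping each camera's cloud into a common frame with its rigid transform and concatenating the results. Here the transforms are given; the paper's registration method estimates them:

```python
# Sketch of combining point clouds from multiple depth cameras.
# The per-camera poses (rotation about z plus translation) are assumed
# known for illustration.
import math

def transform(points, angle_deg, tx, ty):
    """Rotate (x, y, z) points about z by angle_deg, translate by (tx, ty)."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    return [(c * x - s * y + tx, s * x + c * y + ty, z) for x, y, z in points]

# Two cameras 180 degrees apart, each seeing one side of the object.
front = [(0.0, 1.0, 0.5), (0.1, 1.0, 0.6)]
back  = [(0.0, 1.0, 0.5)]                    # in the back camera's frame
merged = transform(front, 0, 0, 0) + transform(back, 180, 0, 2.0)
```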

  3. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS) receiver, which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
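The geolocation computation itself, combining observer GPS position, compass bearing, and laser range, can be sketched with a flat local east/north approximation (adequate at rangefinder distances; the spherical-Earth radius is a standard constant):

```python
# Sketch of geolocation from observer position, bearing, and range.
import math

def geolocate(obs_lat, obs_lon, bearing_deg, range_m):
    """Return (lat, lon) of the target. bearing_deg is degrees from north."""
    earth_r = 6371000.0                       # mean Earth radius, metres
    b = math.radians(bearing_deg)
    d_north = range_m * math.cos(b)           # northward displacement
    d_east = range_m * math.sin(b)            # eastward displacement
    lat = obs_lat + math.degrees(d_north / earth_r)
    lon = obs_lon + math.degrees(d_east / (earth_r * math.cos(math.radians(obs_lat))))
    return lat, lon

# Target 1 km due north of the observer.
lat, lon = geolocate(51.0, -1.0, 0.0, 1000.0)
```

For long ranges or high latitudes a proper geodesic (e.g. Vincenty) formulation would replace this local approximation.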

  4. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
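The cost being minimized can be sketched as a weighted sum of distances between projected LiDAR points and their corresponding image line features, with higher weights on more reliable correspondences. Lines are in the usual ax + by + c = 0 form; the data and weights are illustrative assumptions:

```python
# Sketch of the weighted point-to-line calibration cost.
import math

def point_line_distance(pt, line):
    """Distance from pt = (x, y) to the line ax + by + c = 0."""
    a, b, c = line
    x, y = pt
    return abs(a * x + b * y + c) / math.hypot(a, b)

def calibration_cost(points, lines, weights):
    """Weighted sum of point-to-line distances; a calibration solver would
    minimize this over the candidate extrinsic parameters."""
    return sum(w * point_line_distance(p, l)
               for p, l, w in zip(points, lines, weights))

points  = [(1.0, 2.0), (3.0, 0.5)]
lines   = [(1.0, 0.0, -1.0), (0.0, 1.0, -1.0)]  # the lines x = 1 and y = 1
weights = [1.0, 0.5]
cost = calibration_cost(points, lines, weights)
```

The paper's second contribution, a penalizing function for outliers, would cap or reweight large individual distances before summation.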

  5. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-06-22

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches.

  6. Real-time 3D measurement based on structured light illumination considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, ShiLing

    2014-12-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. In traditional 3-D measurement systems where processing time is not a key factor, camera lens distortion correction is performed directly. However, for time-critical high-speed applications, the time-consuming correction algorithm is inappropriate to perform directly during the real-time process. To cope with this issue, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. A lookup table (LUT) method is introduced as well for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates has been reduced by one order of magnitude and the accuracy of the out-of-plane coordinate tripled after the distortions are eliminated. Moreover, owing to the merit of the LUT, the 3-D reconstruction can be achieved at 92.34 frames per second.
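The precomputed pixel-mapping idea can be sketched as building an undistortion lookup table once and then correcting each frame by pure indexing. A simple radial model with an assumed coefficient stands in for the calibrated camera model:

```python
# Sketch of LUT-based distortion correction: expensive math runs once,
# per-frame correction is a table lookup. k1 is an assumed radial
# distortion coefficient, not a calibrated value.

def build_lut(w, h, k1=-2e-6):
    """Map each corrected pixel to its source pixel in the distorted image."""
    cx, cy = w / 2.0, h / 2.0
    lut = {}
    for y in range(h):
        for x in range(w):
            r2 = (x - cx) ** 2 + (y - cy) ** 2
            sx = int(round(cx + (x - cx) * (1 + k1 * r2)))
            sy = int(round(cy + (y - cy) * (1 + k1 * r2)))
            lut[(x, y)] = (min(max(sx, 0), w - 1), min(max(sy, 0), h - 1))
    return lut

def undistort(img, lut, w, h):
    # img is a dict {(x, y): value}; correction is a pure table lookup.
    return {(x, y): img[lut[(x, y)]] for y in range(h) for x in range(w)}

w, h = 8, 8
lut = build_lut(w, h)
img = {(x, y): x + y for y in range(h) for x in range(w)}
out = undistort(img, lut, w, h)
```

A production system would store the LUT as a flat array and interpolate rather than round, but the cost structure, one-time build and O(1) per-pixel lookup, is the same.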

  7. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416

  8. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera.

    PubMed

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

    This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which agrees well with the measured volume, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
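The ratio-based grading can be sketched as thresholding a linear combination of the two shape ratios named above. The coefficients and threshold here are invented for illustration; the paper fits them to measured fruit:

```python
# Hypothetical sketch of ratio-based kiwifruit shape grading.
# w1, w2, and the threshold are illustrative, not the fitted values.

def shape_score(length_mm, max_diam_mm, proj_area_mm2, w1=1.0, w2=0.02):
    """Linear combination of length/max-diameter and projected-area/length."""
    return w1 * (length_mm / max_diam_mm) + w2 * (proj_area_mm2 / length_mm)

def grade(score, threshold=2.0):
    return "regular" if score < threshold else "elongated"

# A roughly round fruit vs. a long, narrow one.
round_fruit = shape_score(60.0, 55.0, 2500.0)
long_fruit  = shape_score(85.0, 45.0, 3000.0)
```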

  9. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  10. Ground-based analysis of volcanic ash plumes using a new multispectral thermal infrared camera approach

    NASA Astrophysics Data System (ADS)

    Williams, D.; Ramsey, M. S.

    2015-12-01

    Volcanic plumes are complex mixtures of mineral, lithic and glass fragments of varying size, together with multiple gas species. These plumes vary in size dependent on a number of factors, including vent diameter, magma composition and the quantity of volatiles within a melt. However, determining the chemical and mineralogical properties of a volcanic plume immediately after an eruption is a great challenge. Thermal infrared (TIR) satellite remote sensing of these plumes is routinely used to calculate the volcanic ash particle size variations and sulfur dioxide concentration. These analyses are commonly performed using high temporal, low spatial resolution satellites, which can only reveal large scale trends. What is lacking is a high spatial resolution study specifically of the properties of the proximal plumes. Using the emissive properties of volcanic ash, a new method has been developed to determine the plume's particle size and petrology in spaceborne and ground-based TIR data. A multispectral adaptation of a FLIR TIR camera has been developed that simulates the TIR channels found on several current orbital instruments. Using this instrument, data of volcanic plumes from Fuego and Santiaguito volcanoes in Guatemala were recently obtained. Preliminary results indicate that the camera is capable of detecting silicate absorption features in the emissivity spectra over the TIR wavelength range, which can be linked to both mineral chemistry and particle size. It is hoped that this technique can be expanded to isolate different volcanic species within a plume, validate the orbital data, and ultimately to use the results to better inform eruption dynamics modelling.

  11. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera

    PubMed Central

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

    This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which agrees well with the measured volume, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292

  12. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
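The block-match step can be sketched independently of the compressive recovery: for a block in the current frame, search a small window in the reference frame for the displacement minimizing the sum of absolute differences (SAD). The tiny grids below are illustrative, not compressively sensed video:

```python
# Sketch of block-match motion estimation via SAD search.

def sad(frame, ref, bx, by, dx, dy, bs):
    """Sum of absolute differences between the block at (bx, by) in frame
    and the block displaced by (dx, dy) in ref."""
    return sum(abs(frame[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(bs) for i in range(bs))

def block_match(frame, ref, bx, by, bs=2, search=2):
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip displacements that fall outside the reference frame.
            if not (0 <= bx + dx and bx + dx + bs <= len(ref[0])
                    and 0 <= by + dy and by + dy + bs <= len(ref)):
                continue
            c = sad(frame, ref, bx, by, dx, dy, bs)
            if c < best_cost:
                best_cost, best = c, (dx, dy)
    return best

# A 2x2 bright patch shifted one pixel right between ref and frame.
ref   = [[0, 9, 9, 0, 0],
         [0, 9, 9, 0, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
frame = [[0, 0, 9, 9, 0],
         [0, 0, 9, 9, 0],
         [0, 0, 0, 0, 0],
         [0, 0, 0, 0, 0]]
motion = block_match(frame, ref, bx=2, by=0)
```

The returned vector points from the current block back to its source in the reference frame; restricting the search to a block rather than the full frame is what yields the ~30% time reduction reported above.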

  13. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. PMID:23312587
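The spatial-domain filtering idea can be sketched in one dimension: estimate the noise residual as the image minus a smoothed (here median-filtered) version, then correlate residuals from two images to test whether they share a sensor. The 1-D "images" and the fixed noise pattern are simplified stand-ins for real PRNU:

```python
# Sketch of residual extraction and correlation for PRNU-style matching.
import statistics

def residual(signal, k=3):
    """Signal minus a k-tap median-filtered version (edge-truncated)."""
    half = k // 2
    smooth = [statistics.median(signal[max(0, i - half):i + half + 1])
              for i in range(len(signal))]
    return [s - m for s, m in zip(signal, smooth)]

def correlation(a, b):
    """Normalized cross-correlation of two residual vectors."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

# Two images from the "same sensor" share a fixed noise pattern.
prnu = [1, -1, 2, -2, 1, -1, 2, -2, 1, -1]
img1 = [100 + n for n in prnu]
img2 = [150 + n for n in prnu]
same_cam = correlation(residual(img1), residual(img2))
```

In the paper's full method, adaptive filtering and the two-stage pixel-retention step would further suppress image content before this correlation.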

  14. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras.

    PubMed

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%.

  15. Design of an Event-Driven, Random-Access, Windowing CCD-Based Camera

    NASA Astrophysics Data System (ADS)

    Monacos, S. P.; Lam, R. K.; Portillo, A. A.; Zhu, D. Q.; Ortiz, G. G.

    2003-11-01

    Commercially available cameras are not designed for a combination of single-frame and high-speed streaming digital video with real-time control of size and location of multiple regions-of-interest (ROIs). A message-passing paradigm is defined to achieve low-level camera control with high-level system operation. This functionality is achieved by asynchronously sending messages to the camera for event-driven operation, where an event is defined as image capture or pixel readout of a ROI, without knowledge of detailed in-camera timing. This methodology provides a random access, real-time, event-driven (RARE) camera for adaptive camera control and is well suited for target-tracking applications requiring autonomous control of multiple ROIs. This methodology additionally provides for reduced ROI readout time and higher frame rates as compared to a predecessor architecture [1] by avoiding external control intervention during the ROI readout process.

  16. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using a support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved a lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
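The baseline that the SVM method improves on, the energy-weighted centroid approximation, can be sketched directly: the positron entry point of a multi-pixel cluster is estimated as the energy-weighted mean of the fired pixel coordinates. The cluster data are illustrative:

```python
# Sketch of the energy-weighted centroid baseline for positron clusters.

def energy_weighted_centroid(pixels):
    """pixels: list of (x, y, energy_keV) for one positron cluster.
    Returns the energy-weighted mean position in pixel units."""
    total = sum(e for _, _, e in pixels)
    cx = sum(x * e for x, _, e in pixels) / total
    cy = sum(y * e for _, y, e in pixels) / total
    return cx, cy

cluster = [(10, 5, 60.0), (11, 5, 25.0), (12, 6, 15.0)]
cx, cy = energy_weighted_centroid(cluster)
```

The paper's classifier instead tries to pick the single primary fired pixel from the cluster's topological and energy features, which removes the bias the centroid inherits from the scattered tail of the track.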

  17. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using a support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved a lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications. PMID:26086805

  18. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.

  19. High resolution three-dimensional photoacoustic tomography with CCD-camera based ultrasound detection

    PubMed Central

    Nuster, Robert; Slezak, Paul; Paltauf, Guenther

    2014-01-01

    A photoacoustic tomograph based on optical ultrasound detection is demonstrated, which is capable of high resolution real-time projection imaging and fast three-dimensional (3D) imaging. Snapshots of the pressure field outside the imaged object are taken at defined delay times after photoacoustic excitation by use of a charge coupled device (CCD) camera in combination with an optical phase contrast method. From the obtained wave patterns photoacoustic projection images are reconstructed using a back propagation Fourier domain reconstruction algorithm. Applying the inverse Radon transform to a set of projections recorded over a half rotation of the sample provides 3D photoacoustic tomography images in less than one minute with a resolution below 100 µm. The sensitivity of the device was experimentally determined to be 5.1 kPa over a projection length of 1 mm. In vivo images of the vasculature of a mouse demonstrate the potential of the developed method for biomedical applications. PMID:25136491

  20. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  1. Portable profilometer based on low-coherence interferometry and smart pixel camera

    NASA Astrophysics Data System (ADS)

    Salbut, Leszek; Pakuła, Anna; Tomczewski, Sławomir; Styk, Adam

    2010-09-01

    Although low-coherence interferometers are commercially available (e.g., white-light interferometers), they are generally bulky, expensive, and offer limited flexibility. In this paper a new portable profilometer based on low-coherence interferometry is presented. In the device a white-light diode with a controlled spectrum shape is used to increase the contrast of the zero-order fringe, which allows its better and quicker localization. For image analysis a special type of CMOS matrix (called a smart pixel camera), synchronized with the reference mirror transducer, is applied. Because the fringe contrast analysis is realized in hardware, independently in each pixel, the measurement time decreases significantly. High-speed processing together with the compact design allows the profilometer to be used as a portable device for both indoor and outdoor measurements. The capabilities of the designed profilometer are illustrated by a few application examples.

  2. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078
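
    The Neighborhood Average Difference test can be illustrated with a toy sketch (image values and the threshold are assumptions, not the paper's tuned parameters): a pixel is flagged as a defect when its grey value deviates from the mean of its 8-neighbourhood by more than a threshold.

```python
# 8x8 toy grey-level image with one simulated surface defect.
img = [[100] * 8 for _ in range(8)]
img[3][4] = 160

def nad(img, r, c):
    """Absolute difference between a pixel and its 8-neighbourhood mean."""
    neigh = [img[r + dr][c + dc]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if not (dr == 0 and dc == 0)]
    return abs(img[r][c] - sum(neigh) / len(neigh))

THRESH = 20  # assumed defect threshold
defects = [(r, c) for r in range(1, 7) for c in range(1, 7)
           if nad(img, r, c) > THRESH]
print(defects)
```

    The defect pixel deviates strongly from its neighbourhood mean, while its neighbours are diluted by seven normal pixels and stay below the threshold.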

  3. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves.

  4. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions, 8.8 × 8.8 × 0.2 mm3) includes a two-axis decoupling XY stage and has sufficient strength to suspend the image sensor (IS) used for the anti-shaking function. The XY stage is designed to route electrical signals from the suspended IS by using eight signal springs and 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to resolve the shaking problem. The applied voltage for the full 25 μm travel is lower than 20 V; the dynamic resonant frequency of the actuating device is 4485 Hz, and the rise time is 21 ms.

  5. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems.

  6. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  7. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, accessing control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, through various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition through a comparison of recognition rates with conventional systems. PMID:26828487

  8. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  9. Estimating the spatial position of marine mammals based on digital camera recordings.

    PubMed

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-02-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator-prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
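
    The geometric core of the method, under a flat-sea approximation and an assumed per-pixel angular resolution, reduces to one line: a sighting that appears a pixels below the horizon, seen from height h, lies at range roughly h / tan(a · ifov). The camera height matches the study; the pixel angular size is an invented placeholder.

```python
import math

h = 9.59                    # camera height above mean sea level, m (from the study)
ifov = math.radians(0.001)  # assumed angular size of one pixel, rad

def range_to_target(pixels_below_horizon):
    """Flat-sea range estimate from the pixel offset below the horizon."""
    return h / math.tan(pixels_below_horizon * ifov)

print(f"{range_to_target(920):.0f} m")
```

    Targets farther below the horizon line are closer to the camera, which is why the location error grows rapidly near the horizon; the full method also corrects for frame rotation and calibrates pixel size from landscape reference points.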

  10. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982

  11. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V.

    2016-06-01

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.

  12. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models, implying that improvements at the sensor-data level can still contribute significantly to approaching the GRACE baseline accuracy. A recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for processing the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B, focusing on the method used to combine the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solution by applying two different combination methods to the SCA Level-1A data. The first method introduces the anisotropic accuracy of the star camera measurement in terms of a weighting matrix; this method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by about a factor of 3-4. The data analysis revealed that the reason is an incorrect implementation of algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement within the whole spectrum was achieved. Based on these results, the official reprocessing of the SCA data is suggested, as the SCA attitude data

  13. An energy-optimized collimator design for a CZT-based SPECT camera

    PubMed Central

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2015-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance, were carefully studied. Scatter and penetration of the collimator, and sensitivity and spatial resolution of the system, were assessed for four radionuclides, 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then chosen to maximize the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked against prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates the promise of the proposed energy-optimized collimator for use in a CZT-based gamma camera, with comparable or even better imaging performance than commercial collimators.

  14. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating multiwavelength satellite data derived cloud masks using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data derived cloud masks by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The results show that the influence of the error propagation by the MOD35 cloud mask on the MODIS derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
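
    The sky-index part of such a cloud-cover algorithm can be sketched as follows; the index definition (B − R)/(B + R) and the threshold value are illustrative assumptions rather than the paper's calibrated parameters. Clear sky is strongly blue, so the index is high; cloud is near-white, so it drops toward zero.

```python
def sky_index(r, b):
    """Normalised blue-red difference of a sky-camera pixel."""
    return (b - r) / (b + r)

def classify(r, b, thresh=0.12):
    """Label a pixel from its red/blue channels (assumed threshold)."""
    return "cloudy" if sky_index(r, b) < thresh else "clear"

print(classify(70, 200))   # deep blue pixel
print(classify(180, 200))  # whitish pixel
```

    A brightness index is combined with this in practice to handle the circumsolar region and haze, where the blue-red ratio alone is ambiguous.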

  15. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging.

    PubMed

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S; Gould, Robert G

    2011-10-10

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and was varied from -40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-half-maximum (FWHM) of the PSF. In addition, a (99m)Tc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Image quality was compared at -25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of (99m)Tc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison with that at -25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature.
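
    The PSF-width comparison above rests on a standard FWHM estimate, which can be sketched as follows (the Gaussian profile and sampling are hypothetical, not measured data): locate the two half-maximum crossings by linear interpolation and take their separation.

```python
import math

sigma = 0.8                                 # assumed PSF width, mm
xs = [i * 0.05 - 5 for i in range(201)]     # sample positions, mm
psf = [math.exp(-0.5 * (x / sigma) ** 2) for x in xs]

half = max(psf) / 2
crossings = []
for i in range(len(psf) - 1):
    if (psf[i] - half) * (psf[i + 1] - half) < 0:    # sign change brackets a crossing
        frac = (half - psf[i]) / (psf[i + 1] - psf[i])
        crossings.append(xs[i] + frac * 0.05)        # linear interpolation

fwhm = crossings[-1] - crossings[0]
print(f"FWHM = {fwhm:.3f} mm (Gaussian theory {2.3548 * sigma:.3f})")
```

    For a Gaussian profile the estimate should match the analytic value 2*sqrt(2 ln 2)*sigma closely, so the same routine applied at each temperature gives directly comparable widths.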

  16. A pnCCD-based, fast direct single electron imaging camera for TEM and STEM

    NASA Astrophysics Data System (ADS)

    Ryll, H.; Simson, M.; Hartmann, R.; Holl, P.; Huth, M.; Ihle, S.; Kondo, Y.; Kotula, P.; Liebel, A.; Müller-Caspary, K.; Rosenauer, A.; Sagawa, R.; Schmidt, J.; Soltau, H.; Strüder, L.

    2016-04-01

    We report on a new camera that is based on a pnCCD sensor for applications in scanning transmission electron microscopy. Emerging new microscopy techniques demand improved detectors with regard to readout rate, sensitivity and radiation hardness, especially in scanning mode. The pnCCD is a 2D imaging sensor that meets these requirements. Its intrinsic radiation hardness permits direct detection of electrons. The pnCCD is read out at a rate of 1,150 frames per second with an image area of 264 × 264 pixels. In binning or windowing modes, the readout rate increases almost linearly, for example to 4000 frames per second at 4× binning (264 × 66 pixels). Single electrons with energies from 300 keV down to 5 keV can be distinguished due to the high sensitivity of the detector. Three applications in scanning transmission electron microscopy are highlighted to demonstrate that the pnCCD satisfies experimental requirements, especially fast recording of 2D images. In the first application, 65536 2D diffraction patterns were recorded in 70 s. STEM images corresponding to intensities of various diffraction peaks were reconstructed. For the second application, the microscope was operated in a Lorentz-like mode. Magnetic domains were imaged in an area of 256 × 256 sample points in less than 37 s for a total of 65536 images, each with 264 × 132 pixels. Due to the information provided by the two-dimensional images, not only the amplitude but also the direction of the magnetic field could be determined. In the third application, millisecond images of a semiconductor nanostructure were recorded to determine the lattice strain in the sample. A speed-up in measurement time by a factor of 200 was achieved compared to a previously used camera system.

  17. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating multiwavelength satellite data derived cloud masks using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data was developed using a sky index and a brightness index. Then, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data derived cloud masks by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The results show that the influence of the error propagation by the MOD35 cloud mask on the MODIS derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask. PMID:25402920

  18. Modelling Positron Interactions with Matter

    NASA Astrophysics Data System (ADS)

    Garcia, G.; Petrovic, Z.; White, R.; Buckman, S.

    2011-05-01

    In this work we link fundamental measurements of positron interactions with biomolecules, with the development of computer codes for positron transport and track structure calculations. We model positron transport in a medium from a knowledge of the fundamental scattering cross section for the atoms and molecules comprising the medium, combined with a transport analysis based on statistical mechanics and Monte-Carlo techniques. The accurate knowledge of the scattering is most important at low energies, a few tens of electron volts or less. The ultimate goal of this work is to do this in soft condensed matter, with a view to ultimately developing a dosimetry model for Positron Emission Tomography (PET). The high-energy positrons first emitted by a radionuclide in PET may well be described by standard formulas for energy loss of charged particles in matter, but it is incorrect to extrapolate these formulas to low energies. Likewise, using electron cross-sections to model positron transport at these low energies has been shown to be in serious error due to the effects of positronium formation. Work was supported by the Australian Research Council, the Serbian Government, and the Ministerio de Ciencia e Innovación, Spain.
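
    The Monte-Carlo transport idea can be caricatured in a few lines. Every number below (initial energy, stopping threshold, mean free path, loss fractions) is an invented placeholder, not a real cross section: a positron takes exponentially distributed free paths, scatters isotropically, and loses a random energy fraction per collision until it drops below a threshold where positronium formation would dominate.

```python
import math
import random

random.seed(1)

def track(e0=5000.0, e_stop=6.8, mfp=0.1):
    """Follow one positron in 2D until its energy falls below e_stop (eV)."""
    x = y = 0.0
    e = e0
    while e > e_stop:
        theta = random.uniform(0.0, 2.0 * math.pi)  # isotropic scatter
        step = random.expovariate(1.0 / mfp)        # free path, um (placeholder)
        x += step * math.cos(theta)
        y += step * math.sin(theta)
        e *= random.uniform(0.90, 0.99)             # fractional energy loss
    return math.hypot(x, y)                         # terminal distance from origin

ranges = [track() for _ in range(200)]
mean_range = sum(ranges) / len(ranges)
print(f"mean terminal distance: {mean_range:.2f} um")
```

    A real dosimetry code would replace the uniform loss fraction and fixed mean free path with energy-dependent measured cross sections, which is precisely the low-energy data the abstract argues cannot be extrapolated from high-energy formulas or electron data.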

  19. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in cases of low temperature difference between humans and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. With our method, human areas can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
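
    The adaptive-threshold idea can be sketched with a minimal two-input fuzzy system; the membership shapes, input ranges, and candidate thresholds below are assumptions for illustration, not the paper's tuned rules. Two inputs (background pixel variability and mean input/background difference) fire "quiet scene" and "busy scene" rules whose strengths blend a low and a high binarisation threshold.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_threshold(bg_std, mean_diff):
    """Blend low/high candidate thresholds by fuzzy rule strength."""
    low = max(tri(bg_std, 0, 0, 10), tri(mean_diff, 0, 0, 20))    # quiet scene
    high = max(tri(bg_std, 5, 15, 15), tri(mean_diff, 10, 30, 30))  # busy scene
    t_low, t_high = 8.0, 25.0   # assumed candidate thresholds
    return (low * t_low + high * t_high) / (low + high + 1e-9)   # defuzzify

print(fuzzy_threshold(2.0, 5.0))    # calm background: low threshold
print(fuzzy_threshold(12.0, 25.0))  # noisy background: high threshold
```

    The binarised human region is then obtained by comparing each |input − background| pixel difference against the scene-adapted threshold instead of a single fixed value.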

  20. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera.

    PubMed

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in cases of low temperature difference between humans and the background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. With our method, human areas can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564

  2. Precise calibration of linear camera equipped with cylindrical lenses using a radial basis function-based mapping technique.

    PubMed

    Liu, Haiqing; Yang, Linghui; Guo, Yin; Guan, Ruifen; Zhu, Jigui

    2015-02-01

    The linear camera equipped with cylindrical lenses has prominent advantages in high-precision coordinate measurement and dynamic position tracking. However, the serious distortion of the cylindrical lenses limits the application of this camera. To overcome this obstacle, a precise two-step calibration method is developed. In the first step, a radial basis function-based (RBF-based) mapping technique is employed to recover the projection mapping of the imaging system by interpolating the correspondence between incident rays and image points. For an object point in 3D space, the plane passing through the object point in the camera coordinate frame can be calculated accurately by this technique. The second step is the calibration of extrinsic parameters, which realizes the coordinate transformation from the camera coordinate frame to the world coordinate frame. The proposed method has three advantages. Firstly, this method (black-box calibration) remains effective even if the distortion is high and asymmetric. Secondly, the coupling between extrinsic parameters and other parameters, which normally occurs and may lead to calibration failure, is avoided, because this method simplifies the pinhole model and only extrinsic parameters are involved in the simplified model. Thirdly, the nonlinear optimization widely used to refine camera parameters is better conditioned, since fewer parameters are needed and a more accurate initial iteration value is estimated. Both simulated and real experiments have been carried out, and good results have been obtained.
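As a rough illustration of RBF-based mapping, the sketch below interpolates a smooth image-point-to-ray correspondence with Gaussian RBFs. The kernel width, the ridge term, and the synthetic "ray field" in the test are assumptions; the paper's interpolant details may differ:

```python
import numpy as np

def rbf_fit(centers, values, sigma=1.0):
    """Solve for weights w so that sum_j w_j * exp(-(|x - c_j| / sigma)^2)
    interpolates the calibration correspondences at the centers."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)
    # A small ridge term keeps the linear system well conditioned.
    return np.linalg.solve(phi + 1e-9 * np.eye(len(centers)), values)

def rbf_eval(points, centers, weights, sigma=1.0):
    """Evaluate the fitted RBF mapping at new image points."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=-1)
    return np.exp(-(d / sigma) ** 2) @ weights
```

Because the mapping is learned point-wise from correspondences, no parametric distortion model is needed, which is what makes such a "black box" calibration robust to high, asymmetric distortion.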

  3. Multi-Kinect v2 Camera Based Monitoring System for Radiotherapy Patient Safety.

    PubMed

    Santhanam, Anand P; Min, Yugang; Kupelian, Patrick; Low, Daniel

    2016-01-01

    3D Kinect camera systems are essential for real-time imaging of the 3D treatment space, which comprises both the patient anatomy and the treatment equipment setup. In this paper, we present the technical details of a 3D treatment room monitoring system that employs a scalable number of calibrated and co-registered Kinect v2 cameras. The monitoring system tracks the radiation gantry and treatment couch positions, as well as the patient and immobilization accessories. The number and positions of the cameras were selected to avoid line-of-sight issues and to adequately cover the treatment setup. The cameras were calibrated with a calibration error of 0.1 mm. Our tracking system evaluation shows that both gantry and patient motion could be acquired at a rate of 30 frames per second. The transformations between the cameras yielded a 3D treatment space accuracy of < 2 mm error in a radiotherapy setup within 500 mm around the isocenter. PMID:27046604
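Co-registering one camera's frame to another from matched calibration points is typically done with a least-squares rigid fit; a minimal Kabsch-style sketch (a generic technique, not the authors' actual calibration code) is:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    e.g. matched calibration-target points seen by two depth cameras."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)              # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Once (R, t) is known for each camera pair, every point cloud can be mapped into a common treatment-room frame, and the residuals of the fit give the kind of sub-millimetre registration error quoted above.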

  4. Camera-based platform and sensor motion tracking for data fusion in a landmine detection system

    NASA Astrophysics Data System (ADS)

    van der Mark, Wannes; van den Heuvel, Johan C.; den Breejen, Eric; Groen, Frans C. A.

    2003-09-01

    Vehicles that serve as landmine detection robots could be an important tool for demining former conflict areas. On the LOTUS platform for humanitarian demining, different sensors are used to detect a wide range of landmine types. Reliable and accurate detection depends on correctly combining the observations from the different sensors on the moving platform. Currently, a method based on odometry is used to merge the readings from the sensors. In this paper, a vision-based approach is presented which can estimate the relative sensor pose and position together with the vehicle motion. To estimate the relative position and orientation of the sensors, techniques from camera calibration are used. The platform motion is estimated from tracked features on the ground. A new approach is presented which can reduce the influence of tracking errors or other outliers on the accuracy of the ego-motion estimate. Overall, the new vision-based approach for sensor localization leads to better estimates than the current odometry-based method.

  5. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
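The triangulation step mentioned above can be illustrated with the standard linear (DLT) two-view method; this is a generic sketch under simplified projection matrices, not the authors' implementation:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Recover a 3D point from its normalized projections x1, x2 in two
    views with known 3x4 projection matrices, via linear (DLT) triangulation."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = vt[-1]
    return X[:3] / X[3]
```

In the endoscopic setting the two "views" are consecutive video frames, and the accuracy of the recovered structure depends directly on how well the frame-to-frame rotation and translation were estimated.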

  6. Uas Based Tree Species Identification Using the Novel FPI Based Hyperspectral Cameras in Visible, NIR and SWIR Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Honkavaara, E.; Tuominen, S.; Saari, H.; Pölönen, I.; Hakala, T.; Viljanen, N.; Soukkamäki, J.; Näkki, I.; Ojanen, H.; Reinikainen, J.

    2016-06-01

    Remote sensing based on unmanned airborne systems (UAS) offers a flexible tool for environmental monitoring. A novel, lightweight, Fabry-Perot interferometer (FPI) based, frame-format hyperspectral imaging system covering the spectral range from 400 to 1600 nm was used to identify different tree species in a forest area. To the best of the authors' knowledge, this was the first study in which stereoscopic, hyperspectral VIS, NIR, and SWIR data were collected for tree species identification using a UAS. The first results of the analysis, based on the fusion of two FPI-based hyperspectral imagers and an RGB camera, showed that the novel FPI hyperspectral technology provides accurate geometric, radiometric, and spectral information in a forested scene and is operational for environmental remote sensing applications.

  7. Polarization encoded color camera.

    PubMed

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806
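Reading a linear-polarization-encoded channel back out is, in the simplest case, a Stokes-parameter computation from four analyzer orientations; the following is a generic sketch of that readout (assuming ideal Malus-law analyzers), not the paper's actual decoding:

```python
import math

def polarization_angle(i0, i45, i90, i135):
    """Recover the linear polarization angle from intensities measured
    behind analyzers at 0/45/90/135 degrees, using the Stokes parameters
    S1 = I0 - I90 and S2 = I45 - I135: angle = atan2(S2, S1) / 2."""
    s1 = i0 - i90
    s2 = i45 - i135
    return 0.5 * math.atan2(s2, s1)
```

In the paper's scheme the chiral dispersive element maps wavelength to polarization angle, so recovering the angle per pixel amounts to recovering the color channel.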

  8. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656

  10. A UAV-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (Turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions, two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. However, because of the limited geometric photo base of the applied stereo camera and the resulting base-height ratio, the accuracy of the DTM directly depends on the UAV flight altitude.
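The dependence of DTM accuracy on the base-height ratio follows the textbook normal-case stereo error propagation; the flight parameters in the test are purely illustrative, not the study's values:

```python
def stereo_depth_error(height, base, focal_px, disparity_err_px):
    """Textbook normal-case stereo error propagation:
    dZ = Z^2 / (B * f) * d_p.  A short photo base B (as on a compact
    stereo rig) or a high flight altitude Z inflates the height error."""
    return height ** 2 / (base * focal_px) * disparity_err_px
```

This is why the abstract notes that, for a fixed (small) stereo base, the DTM accuracy is governed directly by the UAV flight altitude.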

  11. A novel camera type for very high energy gamma-ray astronomy based on Geiger-mode avalanche photodiodes

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, S.; Commichau, V.; Dorner, D.; Gendotti, A.; Grimm, O.; von Gunten, H.; Hildebrand, D.; Horisberger, U.; Krähenbühl, T.; Kranich, D.; Lorenz, E.; Lustermann, W.; Mannheim, K.; Neise, D.; Pauss, F.; Renker, D.; Rhode, W.; Rissi, M.; Röser, U.; Rollke, S.; Stark, L. S.; Stucki, J.-P.; Viertel, G.; Vogler, P.; Weitzel, Q.

    2009-10-01

    Geiger-mode avalanche photodiodes (G-APD) are promising new sensors for light detection in atmospheric Cherenkov telescopes. In this paper, the design and commissioning of a 36-pixel G-APD prototype camera is presented. The data acquisition is based on the Domino Ring Sampling (DRS2) chip. A sub-nanosecond time resolution has been achieved. Cosmic-ray induced air showers have been recorded using an imaging mirror setup, in a self-triggered mode. This is the first time that such measurements have been carried out with a complete G-APD camera.

  12. Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung

    SciTech Connect

    Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P.; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J.; Steinfort, Daniel; Ball, David L.; Hofman, Michael S.

    2015-10-01

    Purpose: To investigate ⁶⁸Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r² = 0.99, P<.01), with ventilation strongly negatively linear (r² = 0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET
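The linear dose-response modelling can be sketched as an ordinary least-squares fit with a Gaussian-likelihood AIC; the fitting details and the synthetic numbers in the test are assumptions for illustration, not the study's data:

```python
import numpy as np

def fit_linear_dose_response(dose, response):
    """Least-squares line (2 parameters) plus its AIC under Gaussian
    residuals: AIC = n * ln(RSS / n) + 2k.  Requires a nonzero RSS."""
    slope, intercept = np.polyfit(dose, response, 1)
    resid = response - (slope * dose + intercept)
    n, k = len(dose), 2
    rss = float(np.sum(resid ** 2))
    aic = n * np.log(rss / n) + 2 * k
    return slope, intercept, aic
```

Fitting a 2-parameter logistic model the same way and comparing AICs is how one would check, as the authors did, whether the extra curvature is actually warranted.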

  13. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  14. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

    The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. In order to achieve high precision throughout the working range, a two-step method is employed to match the near-infrared intensity image to absolute depth in real time. First, we use structured light, generated by an 808 nm laser diode and a Dammann grating, to coarsely quantize the output space of depth values into discrete bins. Then, a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. The quantitative experimental results show that this depth camera achieves 1% precision over the 20-60 cm range, indicating that it suits resource-limited and low-cost applications.

  15. Camera Calibration Based on Perspective Geometry and Its Application in LDWS

    NASA Astrophysics Data System (ADS)

    Xu, Huarong; Wang, Xiaodong

    In this paper, we present a novel algorithm to calibrate cameras for a lane departure warning system (LDWS). The algorithm needs only a set of parallel lane markings and parallel lines perpendicular to the ground plane to determine camera parameters such as the roll angle, the tilt angle, the pan angle and the focal length. Then, with the camera height, the positions of objects in world space can easily be obtained from the image. We apply the proposed method to our lane departure warning system, which monitors the distance between the car and the road boundaries. Experiments show that the proposed method is easy to operate and can achieve accurate results.
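Once the angles and focal length are calibrated, recovering a ground distance from the camera height is the classic flat-ground back-projection; a minimal sketch (the numbers in the test are assumed, not from the paper):

```python
def ground_distance(row, horizon_row, focal_px, cam_height):
    """Flat-ground range from a forward-looking camera:
    Z = f * h / (y - y0), where y is the image row of the ground contact
    point and y0 the horizon row fixed by the calibrated tilt angle."""
    if row <= horizon_row:
        raise ValueError("point must lie below the horizon")
    return focal_px * cam_height / (row - horizon_row)
```

Points lower in the image (larger y - y0) map to shorter ranges, which is exactly what an LDWS needs to monitor the distance to the road boundary.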

  16. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  17. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  18. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  19. Secondary caries detection with a novel fluorescence-based camera system in vitro

    NASA Astrophysics Data System (ADS)

    Brede, Olivier; Wilde, Claudia; Krause, Felix; Frentzen, Matthias; Braun, Andreas

    2010-02-01

    The aim of the study was to assess the ability of a fluorescence-based optical system to detect secondary caries. The optical detection system (VistaProof) illuminates the tooth surfaces with blue light emitted by high-power GaN LEDs at 405 nm. Employing this almost monochromatic excitation, the fluorescence is analyzed using an RGB camera chip and encoded in color graduations (blue - red - orange - yellow) by software (DBSWIN), indicating the degree of carious destruction. 31 freshly extracted teeth with existing fillings and secondary caries were cleaned, excavated and refilled with the same kind of restorative material: 19 were refilled with amalgam and 12 with a composite resin. Each step was analyzed with the respective software and evaluated statistically; differences were considered statistically significant at p<0.05. There was no difference between measurements at baseline and after cleaning (Mann-Whitney, p>0.05). There was a significant difference between the baseline measurements of the teeth primarily filled with composite resins and the refilled situation (p=0.014), as well as between the non-excavated and the excavated groups (composite p=0.006, amalgam p=0.018). This in vitro study showed that the fluorescence-based system can detect secondary caries next to composite resin fillings but not next to amalgam restorations. Cleaning of the teeth is not necessary if there is no visible plaque. Further studies have to show whether the system performs equally well in vivo.

  20. A high-sensitivity 2x2 multi-aperture color camera based on selective averaging

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Kagawa, Keiichiro; Takasawa, Taishi; Seo, Min-Woong; Yasutomi, Keita; Kawahito, Shoji

    2015-03-01

    To demonstrate the low-noise performance of a multi-aperture imaging system using a selective averaging method, an ultra-high-sensitivity multi-aperture color camera with 2×2 apertures is being developed. In low-light conditions, random telegraph signal (RTS) noise and dark-current white defects become visible, which greatly degrades image quality. To reduce these kinds of noise, as well as to increase the number of incident photons, a multi-aperture imaging system composed of an array of lenses and CMOS image sensors (CIS) is used, together with selective averaging, which minimizes the synthetic sensor noise at every pixel. Simulation verifies that the effective noise at the peak of the noise histogram is reduced from 1.44 e- to 0.73 e- in a 2×2-aperture system, with RTS noise and dark-current white defects successfully removed. In this work, a prototype based on low-noise color sensors with 1280×1024 pixels fabricated in 0.18 µm CIS technology is considered. The pixel pitch is 7.1 µm × 7.1 µm. The sensor noise is around 1 e-, based on folding-integration and cyclic column ADCs, and low-voltage differential signaling (LVDS) is used to improve noise immunity. The synthetic F-number of the prototype is 0.6.
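The selective-averaging idea, i.e. excluding RTS-affected aperture samples per pixel so the synthetic read noise is minimized, can be sketched as below. This is a simplified per-pixel rule under assumed noise values, not the authors' exact algorithm:

```python
import numpy as np

def selective_average(samples, noise):
    """For one output pixel: sort the aperture samples by their per-pixel
    noise, then grow the averaged subset while the synthetic noise
    sqrt(sum sigma_i^2) / k keeps decreasing.  High-noise RTS or
    white-defect pixels are left out automatically."""
    order = np.argsort(noise)
    s, n = samples[order], noise[order]
    best_k, best_sigma = 1, float(n[0])
    for k in range(2, len(n) + 1):
        sigma_k = float(np.sqrt(np.sum(n[:k] ** 2)) / k)
        if sigma_k < best_sigma:
            best_k, best_sigma = k, sigma_k
    return float(s[:best_k].mean()), best_sigma
```

With four equally quiet apertures the synthetic noise drops by a factor of two; with one defective aperture the rule settles on averaging the three clean samples instead.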

  1. A wearable smartphone-enabled camera-based system for gait assessment.

    PubMed

    Kim, Albert; Kim, Junyoung; Rietdyk, Shirley; Ziaie, Babak

    2015-07-01

    Quantitative assessment of gait parameters provides valuable diagnostic and prognostic information. However, most gait analysis systems are bulky, expensive, and designed to be used indoors or in laboratory settings. Recently, wearable systems have attracted considerable attention due to their lower cost and portability. In this paper, we present a simple wearable smartphone-enabled camera-based system (SmartGait) for measurement of spatiotemporal gait parameters. We assess the concurrent validity of SmartGait as compared to a commercially available pressure-sensing walkway (GaitRite). Fifteen healthy young adults (25.8± 2.6 years) were instructed to walk at slow, preferred, and fast speed. The measures of step length (SL), step width (SW), step time (ST), gait speed, double support time (DS) and their variability were assessed for agreement between the two systems; absolute error and intra-class correlation coefficients (ICC) were determined. Measured gait parameters had modest to excellent agreements (ICCs between 0.731 and 0.982). Overall, SmartGait provides many advantages and is a strong alternative wearable system for laboratory and community-based gait assessment. PMID:26059484

  2. New Lower-Limb Gait Asymmetry Indices Based on a Depth Camera

    PubMed Central

    Auvinet, Edouard; Multon, Franck; Meunier, Jean

    2015-01-01

    Background: Various asymmetry indices have been proposed to compare the spatiotemporal, kinematic and kinetic parameters of the lower limbs during the gait cycle. However, these indices rely on gait measurement systems that are costly and generally require manual examination, calibration procedures and the precise placement of sensors/markers on the body of the patient. Methods: To overcome these issues, this paper proposes a new asymmetry index, which uses the output of an inexpensive, easy-to-use and markerless depth camera (Microsoft Kinect™). This asymmetry index uses the depth images provided by the Kinect™ directly, without requiring joint localization. It is based on the longitudinal spatial difference between lower-limb movements during the gait cycle. To evaluate the relevance of this index, fifteen healthy subjects were tested on a treadmill, first walking normally and then with an artificially induced gait asymmetry created by a thick sole placed under one shoe. The gait movement was simultaneously recorded using a Kinect™ placed in front of the subject and a motion capture system. Results: The proposed longitudinal index distinguished asymmetrical gait (p < 0.001), while other symmetry indices based on spatiotemporal gait parameters failed when using such Kinect™ skeleton measurements. Moreover, the correlation coefficient between this index measured by the Kinect™ and the ground truth measured by motion capture is 0.968. Conclusion: This gait asymmetry index measured with a Kinect™ is low cost, easy to use and is a promising development for clinical gait analysis. PMID:25719863
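The longitudinal index can be caricatured in one line: compare the two legs' longitudinal trajectories after a half-cycle shift. The sinusoidal "leg" signals in the test are a stand-in for real Kinect depth measurements, and the whole sketch is an assumption about the index's spirit rather than its exact definition:

```python
import numpy as np

def longitudinal_asymmetry(left, right):
    """Mean longitudinal difference between the two lower limbs after
    shifting one signal by half a gait cycle; a symmetric gait yields an
    index near zero.  Inputs are one-gait-cycle trajectories."""
    half = len(left) // 2
    return float(np.mean(np.abs(left - np.roll(right, half))))
```

A thick sole under one shoe changes one leg's trajectory amplitude, which this index picks up while leaving a symmetric gait near zero.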

  3. Product quality-based eco-efficiency applied to digital cameras.

    PubMed

    Park, Pil-Ju; Tahara, Kiyotaka; Inaba, Atsushi

    2007-04-01

    When calculating eco-efficiency, there is considerable confusion and controversy about what the product value is and how it should be quantified. We propose here a quantification method for eco-efficiency that takes the ratio of the product of a product's quality and its life span to its whole environmental impact, based on Life Cycle Assessment (LCA). In this study, product quality was used as the product value and quantified in the following three steps: (1) normalization based on a value function, (2) determination of the subjective weighting factors of the attributes, and (3) calculation of the product quality of the chosen products. The applicability of the proposed method to an actual product was evaluated using digital cameras. The results show that the eco-efficiency values of products equipped with rechargeable batteries were higher than those of products that use alkaline batteries, because of higher quality values and lower environmental impacts. The sensitivity analysis shows that the proposed method is superior to the existing methods, because it makes it possible to identify the quality level of the chosen products by considering all products on the market that have the same functions, and because, when a new product is added, the quality values already calculated with the proposed method do not have to be changed.
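The core definition reduces to a one-line ratio; the battery-type numbers in the test are invented for illustration only and are not the study's data:

```python
def eco_efficiency(quality, life_span, environmental_impact):
    """Eco-efficiency as defined in the abstract: (product quality x
    product life span) / whole-life-cycle environmental impact (LCA)."""
    if environmental_impact <= 0:
        raise ValueError("environmental impact must be positive")
    return quality * life_span / environmental_impact
```

A camera with both a higher quality score and a lower life-cycle impact dominates on this ratio, which is the pattern the study reports for rechargeable-battery models.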

  4. Positron emission tomography-based boron neutron capture therapy using boronophenylalanine for high-grade gliomas: part II.

    PubMed

    Imahori, Y; Ueda, S; Ohmori, Y; Sakae, K; Kusuki, T; Kobayashi, T; Takagaki, M; Ono, K; Ido, T; Fujii, R

    1998-08-01

    Based on pharmacokinetic findings for fluorine-18-labeled L-fluoroboronophenylalanine obtained by positron emission tomography (PET), methods for estimating tumor 10B concentration were devised. In the clinical practice of boron neutron capture therapy (BNCT) for high-grade gliomas, a large amount of L-boronophenylalanine (L-10B-BPA)-fructose solution is used. Under these conditions, a slow i.v. infusion of L-10B-BPA-fructose solution should be performed for BNCT; therefore, the changes over time in 10B concentration in the target tissue were estimated by convolving the actual time course of plasma 10B concentration with a PET-based weight function incorporating the proper rate constants [K1 (ml/g/min), k2 (min(-1)), k3 (min(-1)), and k4 (min(-1))]. With this method, the estimated values of 10B concentration in gliomas were very close to the 10B levels in surgical specimens. This demonstrated the similarity in pharmacokinetics between fluorine-18-labeled L-fluoroboronophenylalanine and L-10B-BPA. This method, using the appropriate rate constants, permits the determination of tumor 10B concentration and is widely suitable for clinical BNCT, because averaged PET data are sufficient for use in future patients without an individual PET study.
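    The estimation step can be sketched as a discrete convolution of the measured plasma curve with the standard two-tissue-compartment impulse response built from the rate constants K1, k2, k3, k4. This is the generic compartment-model weight function, not necessarily the paper's exact parameterization, and the constants and curves below are placeholders.

```python
import numpy as np

def impulse_response(t, K1, k2, k3, k4):
    """Standard two-tissue-compartment impulse response (weight function)."""
    s = k2 + k3 + k4
    d = np.sqrt(s * s - 4.0 * k2 * k4)
    a1, a2 = (s - d) / 2.0, (s + d) / 2.0
    return (K1 / (a2 - a1)) * ((k3 + k4 - a1) * np.exp(-a1 * t)
                               + (a2 - k3 - k4) * np.exp(-a2 * t))

def tissue_concentration(t, plasma, K1, k2, k3, k4):
    """Convolve the measured plasma 10B time course with the weight function."""
    dt = t[1] - t[0]
    h = impulse_response(t, K1, k2, k3, k4)
    return np.convolve(plasma, h)[:len(t)] * dt
```

    As a sanity check, with k3 = k4 = 0 this reduces to the one-compartment response K1·exp(−k2·t), and a constant plasma input drives the tissue curve toward the steady-state value K1/k2 times the plasma level.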

  5. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 μm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s for an injected activity of 26 MBq of 99mTc, with prior localization by standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector, permitting operation at shorter skin-to-collimator distances.
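    The quoted pinhole geometry can be checked with the basic projection relations (a sketch; function and symbol names are ours):

```python
def pinhole_geometry(sensitive_side_mm, focal_mm, object_dist_mm):
    """FOV side length at the object plane and the minification factor of a
    pinhole camera: image size = object size * focal distance / object distance,
    so the object-side FOV is the detector side scaled by distance/focal."""
    minification = object_dist_mm / focal_mm
    fov_mm = sensitive_side_mm * minification
    return fov_mm, minification

# A ~10 mm focal distance reproduces the quoted ~70 mm FOV (1:5 minification)
# at 50 mm skin-to-collimator distance for the 14.08 mm detector side.
fov, m = pinhole_geometry(14.08, 10.0, 50.0)
```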

  6. KEK-IMSS Slow Positron Facility

    NASA Astrophysics Data System (ADS)

    Hyodo, T.; Wada, K.; Yagishita, A.; Kosuge, T.; Saito, Y.; Kurihara, T.; Kikuchi, T.; Shirakawa, A.; Sanami, T.; Ikeda, M.; Ohsawa, S.; Kakihara, K.; Shidara, T.

    2011-12-01

    The Slow Positron Facility at the Institute of Materials Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user-dedicated facility with an energy-tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps-). During the year 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is being installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a 22Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those who are interested in beginning research with a slow positron beam.

  7. Potential of Uav-Based Laser Scanner and Multispectral Camera Data in Building Inspection

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Weller, C.

    2016-06-01

    Conventional building inspection of bridges, dams or large constructions in general is rather time-consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate. The utilisation of UAVs as an assisting tool in building inspection is therefore an obvious choice. Furthermore, light-weight special sensors such as infrared and thermal cameras as well as laser scanners are available and predestined for use on unmanned aircraft systems. Such a flexible low-cost system is realized in the ADFEX project with the goal of time-efficient object exploration, monitoring and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection. An overview of the ADFEX project, sensor specifications and the requirements of building inspections in general is given. On the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection are presented and discussed.

  8. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    PubMed

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-08-31

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process transforms a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time.

  9. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    PubMed Central

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process transforms a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time. PMID:27589765

  10. A regional density distribution based wide dynamic range algorithm for infrared camera systems

    NASA Astrophysics Data System (ADS)

    Park, Gyuhee; Kim, Yongsung; Joung, Shichang; Shin, Sanghoon

    2014-10-01

    Forward-Looking InfraRed (FLIR) imaging systems have been widely used for both military and civilian purposes. Military applications include target acquisition and tracking and night vision; civilian applications include thermal efficiency analysis, short-range wireless communication, weather forecasting and various other uses. The dynamic range of a FLIR imaging system is larger than that of a commercial display, so auto gain control and contrast enhancement algorithms are generally applied. In IR imaging systems, histogram equalization and plateau equalization are commonly used for contrast enhancement. However, neither addresses excessive enhancement when the luminance histogram is concentrated in a specific narrow region. In this paper, we propose a regional density distribution based wide dynamic range (WDR) algorithm for infrared camera systems. Depending on the way WDR is implemented, the results can differ considerably. Our approach is a single-frame WDR algorithm that enhances the contrast of both dark and bright detail in real time without losing histogram bins. The significant luminance changes caused by conventional contrast enhancement methods may introduce luminance saturation and failure in object tracking; the proposed method guarantees both effective contrast enhancement and successful object tracking. Moreover, since the proposed method does not use multiple images for WDR, computational complexity is significantly reduced in software/hardware implementations. The experimental results show that the proposed method outperforms conventional contrast enhancement methods.
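    For reference, plateau equalization, one of the baseline methods the abstract contrasts against, can be sketched as histogram equalization with the histogram clipped at a plateau value, which tempers over-enhancement when one narrow luminance region dominates:

```python
import numpy as np

def plateau_equalize(img, plateau, out_levels=256):
    """Map a high-dynamic-range integer IR image to out_levels gray levels
    using a histogram clipped at `plateau` before building the CDF."""
    hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
    clipped = np.minimum(hist, plateau)          # cap dominant bins
    cdf = np.cumsum(clipped).astype(float)
    cdf /= cdf[-1]                               # normalize to [0, 1]
    return (cdf[img] * (out_levels - 1)).astype(np.uint16)
```

    With the plateau set to infinity this degenerates to ordinary histogram equalization; a small plateau flattens the mapping toward a linear stretch, which is the trade-off the proposed regional-density method aims to improve on.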

  11. Two Cloud-Based Cues for Estimating Scene Structure and Camera Calibration.

    PubMed

    Jacobs, Nathan; Abrams, Austin; Pless, Robert

    2013-03-01

    We describe algorithms that use cloud shadows as a form of stochastically structured light to support 3D scene geometry estimation. Taking video captured from a static outdoor camera as input, we use the relationship of the time series of intensity values between pairs of pixels as the primary input to our algorithms. We describe two cues that relate the 3D distance between a pair of points to the pair of intensity time series. The first cue results from the fact that two pixels that are nearby in the world are more likely to be under a cloud at the same time than two distant points. We describe methods for using this cue to estimate focal length and scene structure. The second cue is based on the motion of cloud shadows across the scene; this cue results in a set of linear constraints on scene structure. These constraints have an inherent ambiguity, which we show how to overcome by combining the cloud motion cue with the spatial cue. We evaluate our method on several time lapses of real outdoor scenes.
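    The first cue can be illustrated with a normalized correlation between two pixels' intensity time series; pixels that co-vary under passing cloud shadows score high. This is a simplification standing in for the paper's actual estimator:

```python
import numpy as np

def proximity_cue(series_a, series_b):
    """Normalized cross-correlation of two intensity time series; values near
    1 suggest the two pixels are often shadowed together (nearby in the world)."""
    a = series_a - series_a.mean()
    b = series_b - series_b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```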

  12. Single camera absolute motion based digital elevation mapping for a next generation planetary lander

    NASA Astrophysics Data System (ADS)

    Feetham, Luke M.; Aouf, Nabil; Bourdarias, Clement; Voirin, Thomas

    2014-05-01

    Robotic planetary surface exploration missions are becoming much more ambitious in their science goals as they attempt to answer the bigger questions relating to the possibility of life elsewhere in our solar system. Answering these questions will require scientifically rich landing sites. Such sites are unlikely to be located in relatively flat regions that are free from hazards; therefore, there is a growing need for next generation entry, descent and landing systems to possess highly sophisticated navigation capabilities coupled with active hazard avoidance that can enable a pin-point landing. As a first step towards achieving these goals, a multi-source, multi-rate data fusion algorithm is presented that combines single camera recursive feature-based structure from motion (SfM) estimates with measurements from an inertial measurement unit in order to overcome the scale ambiguity problem by directly estimating the unknown scale factor. This paper focuses on accurate estimation of absolute motion parameters, as well as the estimation of sparse landing site structure to provide a starting point for hazard detection. We assume no prior knowledge of the landing site terrain structure or of the landing craft motion in order to fully assess the capabilities of the proposed algorithm to allow a pin-point landing on distant solar system bodies where accurate knowledge of the desired landing site may be limited. We present results using representative synthetic images of deliberately challenging landing scenarios, which demonstrate that the proposed method has great potential.
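    The core of the scale-ambiguity resolution can be reduced to a least-squares ratio between metric displacements (from integrated IMU data) and the up-to-scale SfM displacements. The paper fuses these recursively in a filter; this standalone batch estimate is our simplification of the underlying idea:

```python
import numpy as np

def estimate_scale(sfm_positions, imu_positions):
    """Least-squares scale s minimizing ||imu_d - s * sfm_d||^2 over the
    frame-to-frame displacement vectors of the two trajectories."""
    sfm_d = np.diff(np.asarray(sfm_positions, dtype=float), axis=0)
    imu_d = np.diff(np.asarray(imu_positions, dtype=float), axis=0)
    return float(np.sum(imu_d * sfm_d) / np.sum(sfm_d * sfm_d))
```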

  13. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image-motion vector of a space camera; adjusting this angle reduces its influence on image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic vibration excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift-adjusting mechanism are presented. A drift-adjusting system was designed around the T-60 ultrasonic motor; it is composed of the drift-adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift-adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder serves as the sensor of the position closed loop, and a voltage driving circuit acts as the ultrasonic-wave generator. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift-adjusting system, a disturbance source was introduced and a simulation analysis was performed. The motor-drive control system for drift adjustment was designed with an improved PID control. The drift-angle adjusting system has such advantages as small size, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift-angle adjustment task well.
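    The position loop can be sketched in discrete time. The gains, plant model and sample period below are invented for illustration; the real controller runs an improved PID on the TMS320F28335 with encoder feedback around the ultrasonic motor:

```python
class PID:
    """Discrete PID controller closing the drift-angle position loop."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: the motor axis integrates the commanded rate (angle' = u).
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
angle = 0.0
for _ in range(2000):
    angle += pid.step(1.0, angle) * dt   # drive the drift angle to 1.0
```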

  14. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. This in turn runs a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  15. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    PubMed

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process transforms a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time. PMID:27589765

  16. Colorimetric analyzer based on mobile phone camera for determination of available phosphorus in soil.

    PubMed

    Moonrungsee, Nuntaporn; Pencharee, Somkid; Jakmunee, Jaroon

    2015-05-01

    A field-deployable colorimetric analyzer based on an Android mobile phone was developed for the determination of available phosphorus content in soil. An inexpensive mobile phone with an embedded digital camera was used to photograph the chemical solution under test. The method involves the reaction of phosphorus (in the orthophosphate form) with ammonium molybdate and potassium antimonyl tartrate to form phosphomolybdic acid, which is reduced by ascorbic acid to produce intensely colored molybdenum blue. A software program was developed for the phone to record and analyze the RGB color of the picture. A light-tight box with controlled LED illumination was fabricated to improve the precision and accuracy of the measurement. Under the optimum conditions, a calibration graph was created by measuring the blue color intensity of a series of standard phosphorus solutions (0.0-1.0 mg P L(-1)); the calibration equation obtained was then retained by the program for the analysis of sample solutions. The results obtained from the proposed method agreed well with the spectrophotometric method, with a detection limit of 0.01 mg P L(-1) and a sample throughput of about 40 h(-1). The developed system provides good accuracy (RE < 5%) and precision (RSD < 2%, intra- and inter-day), fast and cheap analysis, and is especially convenient for in-field soil analysis of the phosphorus nutrient.
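    The calibration step can be sketched as a linear fit of blue-channel intensity against the standard concentrations, inverted for an unknown sample. The intensity values below are synthetic placeholders, not the paper's calibration data:

```python
import numpy as np

def calibrate(concentrations, blue_intensities):
    """Fit the linear calibration graph: blue intensity vs. mg P L^-1."""
    slope, intercept = np.polyfit(concentrations, blue_intensities, 1)
    return slope, intercept

def concentration_from_blue(blue, slope, intercept):
    """Invert the calibration line for an unknown sample."""
    return (blue - intercept) / slope

# Hypothetical standards over the paper's 0.0-1.0 mg P L^-1 range.
standards = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
blues = 10.0 + 80.0 * standards          # synthetic, perfectly linear response
slope, intercept = calibrate(standards, blues)
sample_conc = concentration_from_blue(50.0, slope, intercept)
```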

  17. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.

  18. Realization of the FPGA based TDI algorithm in digital domain for CMOS cameras

    NASA Astrophysics Data System (ADS)

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong

    2012-10-01

    In order to make CMOS image sensors suitable for high-resolution space imaging applications, a new method realizing TDI in the digital domain by FPGA is proposed in this paper, which improves the imaging mode for area-array CMOS sensors. The TDI algorithm accumulates the corresponding pixels of adjoining frames in the digital domain, so the gray values increase by a factor of M, where M is the integration number, and the image quality in terms of signal-to-noise ratio is improved. In addition, optimizations of the TDI algorithm are discussed. First, signal storage is optimized with two slices of external RAM, using memory-depth expansion and a ping-pong operation mechanism. Second, a FIFO operation mechanism reduces the read and write operations on memory by M×(M-1) times, saving signal transfer time proportional to the square of the integration number (M²), so that the frame frequency can increase greatly. Finally, a CMOS camera based on digital-domain TDI was developed, and the algorithm was validated by experiments on it.
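    The shift-and-add accumulation can be sketched as follows. This models the algorithm's arithmetic, not the FPGA implementation, and assumes the scene advances exactly one row per frame:

```python
import numpy as np

def digital_tdi(frames, M):
    """Accumulate M successive frames of a scene that scrolls one row per
    frame; each fully integrated output row sums M views of the same scene
    line, so signal grows by M while uncorrelated noise grows by sqrt(M)."""
    f = np.asarray(frames, dtype=np.int64)
    n_frames, rows, cols = f.shape
    out = np.zeros((rows, cols), dtype=np.int64)
    for i in range(M):
        # frame i sees scene line s at row s - i; realign and add
        out[M - 1:] += f[i, M - 1 - i:rows - i]
    return out          # rows above M-1 are only partially integrated

scene = np.arange(12, dtype=np.int64)                 # 1-D scene pattern
frames = [np.repeat(scene[i:i + 8, None], 3, axis=1)  # 8-row, 3-col frames
          for i in range(4)]
tdi = digital_tdi(frames, M=4)
```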

  19. Image-based correction of the light dilution effect for SO2 camera measurements

    NASA Astrophysics Data System (ADS)

    Campion, Robin; Delgado-Granados, Hugo; Mori, Toshiya

    2015-07-01

    Ultraviolet SO2 cameras are increasingly used in volcanology because of their ability to remotely measure the 2D distribution of SO2 in volcanic plumes at high frequency. However, light dilution, i.e., the scattering of ambient photons into the instrument's field of view (FoV) by air parcels located between the plume and the instrument, induces a systematic underestimation of the measurements, whose magnitude increases with distance, SO2 content, atmospheric pressure and turbidity. Here we describe a robust and straightforward method to quantify and correct this effect. We retrieve atmospheric scattering coefficients based on the contrast attenuation between the sky and the increasingly distant slopes of the volcanic edifice. We illustrate our method with a case study at Etna volcano, where the difference between corrected and uncorrected emission rates amounts to 40% to 80%, and investigate the temporal variations of the scattering coefficient during 1 h of measurements at Etna. We validate the correction method at Popocatépetl volcano by performing measurements of the same plume at different distances from the volcano. Finally, we report the atmospheric scattering coefficients for several volcanoes at different latitudes and altitudes.
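    The retrieval can be sketched as a log-linear fit of terrain contrast against distance, followed by rescaling the diluted plume signal. This is a simplified Beer-Lambert-style model of the approach; function and variable names are ours:

```python
import numpy as np

def scattering_coefficient(distances_m, contrasts):
    """Fit C(d) = C0 * exp(-k d) to contrast measurements of increasingly
    distant terrain features: k is the slope of -ln(C) versus distance."""
    k, neg_ln_c0 = np.polyfit(distances_m, -np.log(contrasts), 1)
    return k

def undilute(apparent_signal, k, plume_distance_m):
    """Undo the exponential dilution of the plume signal at distance d."""
    return apparent_signal / np.exp(-k * plume_distance_m)

d = np.array([1000.0, 2000.0, 4000.0, 8000.0])
c = 0.9 * np.exp(-2.0e-4 * d)            # synthetic contrast attenuation
k = scattering_coefficient(d, c)
recovered = undilute(0.5 * np.exp(-k * 5000.0), k, 5000.0)
```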

  20. Hyperspectral characterization of fluorophore diffusion in human skin using a sCMOS based hyperspectral camera

    NASA Astrophysics Data System (ADS)

    Hernandez-Palacios, J.; Haug, I. J.; Grimstad, Ø.; Randeberg, L. L.

    2011-07-01

    Hyperspectral fluorescence imaging is a modality combining high spatial and spectral resolution with increased sensitivity for low photon counts. The main objective of the current study was to investigate whether this technique is a suitable tool for the characterization of diffusion properties in human skin. This was done by imaging fluorescence from Alexa 488 in ex vivo human skin samples using an sCMOS-based hyperspectral camera. Pre-treatment with acetone, DMSO and mechanical micro-needling of the stratum corneum created variation in epidermal permeability between the measured samples. Selected samples were also stained using fluorescence-labelled biopolymers. The effect of fluorescence enhancers on transdermal diffusion could be documented from the collected data. Acetone was found to have an enhancing effect on the transport, and the results indicate that the biopolymers might have a similar effect. The enhancement from these compounds was not as prominent as the effect of mechanical penetration of the sample using a micro-needling device. Hyperspectral fluorescence imaging has thus proven to be an interesting tool for the characterization of fluorophore diffusion in ex vivo skin samples. Further work will include repetition of the measurements on a shorter time scale and mathematical modeling of the diffusion process to determine the diffusivity in skin of the compounds in question.

  1. A study of defects in iron-based binary alloys by the Mössbauer and positron annihilation spectroscopies

    SciTech Connect

    Idczak, R. Konieczny, R.; Chojcan, J.

    2014-03-14

    The room-temperature positron annihilation lifetime spectra and ⁵⁷Fe Mössbauer spectra were measured for pure Fe as well as for iron-based Fe₁₋ₓReₓ, Fe₁₋ₓOsₓ, Fe₁₋ₓMoₓ, and Fe₁₋ₓCrₓ solid solutions, where x is in the range between 0.01 and 0.05. The measurements were performed in order to check whether the theoretical calculations known from the literature on the interactions between vacancies and solute atoms in iron can be supported by experimental data. The vacancies were created during formation and subsequent mechanical processing of the iron systems under consideration, so the spectra mentioned above were collected at least twice for each studied sample synthesized in an arc furnace: after cold rolling to a thickness of about 40 μm as well as after subsequent annealing at 1270 K for 2 h. It was found that only in Fe and the Fe-Cr system are the isolated vacancies thermally generated at high temperatures not observed at room temperature, and cold rolling of the materials leads to the creation of another type of vacancy, associated with edge dislocations. In the case of the other cold-rolled systems, positrons detect vacancies of the two types mentioned above, and the Mössbauer nuclei “see” the vacancies mainly in the vicinity of non-iron atoms. This speaks in favour of the suggestion that in an iron matrix the solute atoms of Os, Re, and Mo interact attractively with vacancies, as predicted by theoretical computations, and that the energy of the interaction is large enough for vacancy-solute pairs to exist at room temperature. On the other hand, the corresponding interaction for Cr atoms is either repulsive, or attractive but weaker than that for Os, Re, and Mo atoms. The latter is in agreement with the theoretical calculations.

  2. Cloud phase identification based on brightness temperatures provided by the bi-spectral IR Camera of JEM-EUSO Mission

    NASA Astrophysics Data System (ADS)

    de Castro, Antonio J.; Briz, Susana; Fernández-Gómez, Isabel; Rodríguez, Irene; López, Fernando

    2015-03-01

    Cloud information is extremely important to correctly interpret the JEM-EUSO telescope data, since UV radiation coming from an Extensive Air Shower can be partially absorbed or reflected by clouds. In order to observe the atmosphere and clouds in the field of view of the UV telescope, the JEM-EUSO system will include an Atmospheric Monitoring System, which consists of a LIDAR and an IR Camera. Until now, several radiative algorithms have been developed to retrieve the cloud top temperature from the brightness temperatures (BT) that the IR Camera will provide in two IR spectral bands (10.8 and 12 μm). In some cases the performance of the algorithms depends on the cloud phase: water, ice or mixed. For this reason, the identification of the cloud phase is valuable information for the correct interpretation of the cloud temperatures retrieved by radiative algorithms. Some previous proposals based on brightness temperature differences (BTD) have revealed that it is not easy to determine the phase unambiguously. In this work we present criteria to retrieve the cloud phase based on IR Camera BTDs. The criteria have been checked against MODIS images to evaluate the possibility of identifying cloud phase with the JEM-EUSO IR Camera.

  3. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  4. Development of a non-delay line constant fraction discriminator based on the Padé approximant for time-of-flight positron emission tomography scanners

    NASA Astrophysics Data System (ADS)

    Kim, S. Y.; Ko, G. B.; Kwon, S. I.; Lee, J. S.

    2015-01-01

    In positron emission tomography, the constant fraction discriminator (CFD) circuit is used to acquire accurate arrival times for the annihilation photons with minimum sensitivity to time walk. As the number of readout channels increases, it becomes difficult to use conventional CFDs because of the large amount of space required for the delay line part of the circuit. To make the CFD compact, flexible, and easily controllable, a non-delay-line CFD based on the Padé approximant is proposed. The non-delay-line CFD developed in this study is shown to have timing performance that is similar to that of a conventional delay-line-based CFD in terms of the coincidence resolving time of a fast photomultiplier tube detector. This CFD can easily be applied to various positron emission tomography system designs that contain high-density detectors with multi-channel structures.
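    The idea can be sketched numerically: a first-order Padé(1,1) all-pass stage stands in for the delay line, and the CFD crossing is found on the bipolar signal z(t) = f·x(t) − x_delayed(t). This is our own discrete-time reading of the approach, not the paper's circuit; note that the crossing time is amplitude-invariant because the whole chain is linear.

```python
import numpy as np

def pade_delay(x, td, dt):
    """Pade(1,1) all-pass approximation of a delay td,
    H(s) = (1 - s*td/2) / (1 + s*td/2), discretized via bilinear transform."""
    a = td / dt
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = ((1 - a) * x[n] + (1 + a) * x[n - 1]
                - (1 - a) * y[n - 1]) / (1 + a)
    return y

def cfd_crossing_time(x, f, td, dt):
    """Zero-crossing time of z = f*x - delayed(x), linearly interpolated."""
    z = f * x - pade_delay(x, td, dt)
    n = int(np.argmax((z[:-1] > 0) & (z[1:] <= 0)))   # first + to - crossing
    return (n + z[n] / (z[n] - z[n + 1])) * dt

dt = 0.1                                   # hypothetical sampling period (ns)
t = np.arange(0.0, 120.0, dt)
pulse = np.exp(-((t - 50.0) ** 2) / 50.0)  # detector-like pulse, sigma = 5 ns
t_small = cfd_crossing_time(pulse, 0.5, 2.0, dt)
t_large = cfd_crossing_time(3.0 * pulse, 0.5, 2.0, dt)
```

    Because scaling the pulse scales z without moving its zero crossing, `t_small` and `t_large` agree to numerical precision, which is the time-walk immunity the CFD provides.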

  5. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-03-19

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as low image resolution, low contrast and the large noise of thermal images, and it is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are determined adaptively based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the horizontal and vertical histograms of the detected area; this procedure also operates adaptively based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective
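
The first two steps (background generation and adaptive differencing) can be sketched as follows; the filter choices and threshold rule here are simplified stand-ins for the procedures in the paper:

```python
import numpy as np

def background_and_candidates(frames, k=0.2):
    """Sketch of background generation plus adaptive differencing
    (names and threshold rule are ours, not the paper's): a background
    image from per-pixel median filtering of a frame stack, then candidate
    human pixels where the latest frame differs from the background by a
    threshold scaled with the background brightness."""
    stack = np.asarray(frames, dtype=float)
    background = np.median(stack, axis=0)      # per-pixel median over time
    thresh = k * background.mean() + 1.0       # brighter background -> higher threshold
    diff = np.abs(stack[-1] - background)      # last frame is the "input image"
    return background, diff > thresh
```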

  7. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi R.

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
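
A toy version of the approach: a forward model that releases a fraction of each pixel's charge into an exponential trail, inverted by fixed-point iteration against the observed column. The trail parameters are illustrative, not the published ACS values:

```python
import numpy as np

def add_cte_trail(column, loss=0.1, tau=3.0, ntrail=20):
    """Toy forward model of imperfect CTE: a fraction `loss` of each pixel's
    charge is trapped and re-released exponentially into the next `ntrail`
    pixels along the readout direction."""
    column = np.asarray(column, dtype=float)
    kern = np.exp(-np.arange(1, ntrail + 1) / tau)
    kern = loss * kern / kern.sum()
    out = column * (1.0 - loss)
    for i, c in enumerate(column):
        j = slice(i + 1, min(i + 1 + ntrail, len(column)))
        out[j] += c * kern[: j.stop - j.start]
    return out

def remove_cte_trail(observed, niter=40, **kw):
    """Invert the model by fixed-point iteration: repeatedly push the current
    estimate through the forward model and correct by the residual."""
    est = np.asarray(observed, dtype=float).copy()
    for _ in range(niter):
        est = est + (observed - add_cte_trail(est, **kw))
    return est
```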

  8. A field-based technique for the longitudinal profiling of ultrarelativistic electron or positron bunches down to lengths of {le}10 microns

    SciTech Connect

    Tatchyn, R.

    1993-05-01

    Present and future generations of particle accelerating and storage machines are expected to develop ever-decreasing electron/positron bunch lengths, down to 100 μm and beyond. In this paper a method for measuring the longitudinal profiles of ultrashort (1000 μm to ≈10 μm) bunches is outlined, based on: (1) the extreme field compaction attained by ultrarelativistic particles, and (2) the reduction of the group velocity of a visible light pulse in a suitably chosen dielectric medium.

  9. Instrumentation optimization for positron emission mammography

    SciTech Connect

    Moses, William W.; Qi, Jinyi

    2003-06-05

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  10. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350
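
The selection step can be illustrated with a standard 0-1 knapsack solved by dynamic programming; in this setting the "value" of a base classifier would be its accuracy and the "weight" a redundancy cost, though the tailored variant in the paper differs in detail:

```python
def knapsack_select(values, weights, capacity):
    """Standard 0-1 knapsack by dynamic programming (integer weights).
    Returns (best total value, sorted indices of the chosen items).
    A generic illustration, not the paper's tailored algorithm."""
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                    # skip item i-1
            if w <= c and dp[i - 1][c - w] + v > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + v        # take item i-1
    chosen, c = [], capacity                           # backtrack the chosen set
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)
```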

  11. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem, which arises from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  12. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features; the combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors that share no common features while avoiding the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
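
Applying a depth-dependent homography table can be sketched as follows; the linear blending between the two Hlut entries bracketing the measured depth is our simplification of the table lookup:

```python
import numpy as np

def warp_with_hlut(pt, depth, hlut_depths, hlut_Hs):
    """Apply a depth-dependent homography: pick the two Hlut entries
    bracketing `depth` (hlut_depths must be sorted ascending), blend them
    linearly, and map the pixel in homogeneous coordinates. The blending
    rule is a simplification for illustration."""
    hlut_depths = np.asarray(hlut_depths, dtype=float)
    i = int(np.clip(np.searchsorted(hlut_depths, depth), 1, len(hlut_depths) - 1))
    d0, d1 = hlut_depths[i - 1], hlut_depths[i]
    t = (depth - d0) / (d1 - d0)
    H = (1.0 - t) * hlut_Hs[i - 1] + t * hlut_Hs[i]   # blended homography
    x = H @ np.array([pt[0], pt[1], 1.0])
    return x[:2] / x[2]                                # back to inhomogeneous coords
```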

  13. Fast time-lens-based line-scan single-pixel camera with multi-wavelength source

    PubMed Central

    Guo, Qiang; Chen, Hongwei; Weng, Zhiliang; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2015-01-01

    A fast time-lens-based line-scan single-pixel camera with multi-wavelength source is proposed and experimentally demonstrated in this paper. A multi-wavelength laser instead of a mode-locked laser is used as the optical source. With a diffraction grating and dispersion compensating fibers, the spatial information of an object is converted into temporal waveforms which are then randomly encoded, temporally compressed and captured by a single-pixel photodetector. Two algorithms (the dictionary learning algorithm and the discrete cosine transform-based algorithm) for image reconstruction are employed, respectively. Results show that the dictionary learning algorithm has greater capability to reduce the number of compressive measurements than the DCT-based algorithm. The effective imaging frame rate increases from 200 kHz to 1 MHz, which shows a significant improvement in imaging speed over conventional single-pixel cameras. PMID:26417527
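
The measurement model of a single-pixel camera, with a DCT-domain least-squares reconstruction standing in for the paper's DCT-based algorithm, can be sketched as:

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II matrix; rows are basis vectors."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (np.arange(n) + 0.5) * k / n)
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

def single_pixel_recover(x, m, rng=None):
    """Single-pixel-camera sketch (our simplification, not the paper's
    solver): m random +/-1 mask measurements of the scene line x, then a
    DCT-domain least-squares reconstruction."""
    rng = rng or np.random.default_rng(0)
    n = len(x)
    Phi = rng.choice([-1.0, 1.0], size=(m, n))   # one mask per measurement
    y = Phi @ x                                  # single-pixel detector outputs
    C = dct_basis(n)
    A = Phi @ C.T                                # measurements vs DCT coefficients
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return C.T @ coeffs                          # back to the spatial domain
```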

  14. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
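
The coordinate transformation with sub-pixel interpolation can be illustrated by sampling a rectangular image on a log-polar (retina-like) ring/sector grid; the geometry parameters here are hypothetical:

```python
import numpy as np

def bilinear(img, x, y):
    """Sub-pixel sample of img at float coords (x, y) by bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

def sample_retina_layout(rect, nrings=32, nsectors=64, rmin=2.0):
    """Sketch of the coordinate transform for a retina-like (log-polar)
    pixel layout: ring/sector samples taken from a rectangular image with
    sub-pixel bilinear interpolation. Geometry is illustrative."""
    h, w = rect.shape
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    rmax = min(cx, cy)
    out = np.zeros((nrings, nsectors))
    for r in range(nrings):
        rho = rmin * (rmax / rmin) ** (r / (nrings - 1))   # log-spaced radii
        for s in range(nsectors):
            th = 2 * np.pi * s / nsectors
            out[r, s] = bilinear(rect, cx + rho * np.cos(th), cy + rho * np.sin(th))
    return out
```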

  15. Microcontroller-based intelligent low-cost-linear-sensor-camera for general edge detection

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Justen, Detlef

    1997-09-01

    With this paper we present an intelligent low-cost camera. Intelligent means that a microcontroller does all the controlling and provides several inputs and outputs; the camera is a stand-alone system. The basic element of the camera is a linear sensor consisting of a photodiode array (PDA). In comparison with standard CCD chips, this type of sensor is a low-cost component and its operation is very simple. Furthermore, this paper shows the mechanical, electrical and electro-optical differences between CCD and PDA sensors, so the reader will be able to choose the right sensor for a particular task. Two industrial applications are described at the end of this paper.
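
Edge detection on a linear PDA scan can be as simple as thresholding the first difference of adjacent pixels; a minimal sketch, not the camera's firmware:

```python
def detect_edges(line, thresh=30):
    """Minimal 1-D edge detector for a linear photodiode-array scan:
    return the indices where the absolute difference between adjacent
    pixel values exceeds `thresh` (an illustrative threshold)."""
    return [i for i in range(1, len(line))
            if abs(line[i] - line[i - 1]) > thresh]
```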

  16. Performances of a solid streak camera based on conventional CCD with nanosecond time resolution

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Bai, Yonglin; Zhu, Bingli; Gou, Yongsheng; Xu, Peng; Bai, XiaoHong; Liu, Baiyu; Qin, Junjun

    2015-02-01

    Imaging systems with high temporal resolution are needed to study rapid physical phenomena ranging from shock waves, including extracorporeal shock waves used for surgery, to diagnostics of laser fusion and fuel injection in internal combustion engines. However, conventional streak cameras use a vacuum tube, which makes them fragile, cumbersome and expensive. Here we report a CMOS streak camera project that aims to reproduce this streak camera functionality completely with a single CMOS chip. By changing the charge-transfer mode of the CMOS image sensor, fast photoelectric diagnostics of a single point with a linear CMOS sensor and high-speed line scanning with an array CMOS sensor can be achieved. A fast photoelectric diagnostic system has been designed and fabricated to investigate the feasibility of this method. Finally, the dynamic operation of the sensors is described. Measurements show a sample time of 500 ps and a time resolution better than 2 ns.

  17. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images of digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse sciences such as computing, mechanics, thermodynamics, optics, electricity and radiography. Broadly, hard-copy devices are classified as laser-based or non-laser-based technologies. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  18. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
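
The accuracy measure that ATLAAS predicts, the Dice similarity coefficient, is straightforward to compute for two binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```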

  19. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  20. Direct electronic linearization for camera-based spectral domain optical coherence tomography.

    PubMed

    Payne, Andrew; Podoleanu, Adrian Gh

    2012-06-15

    An electronic method of k-space linearization for an analog camera for use in optical coherence tomography is demonstrated. The method applies a chirp to the data transfer clock signal of the camera in order to temporally compensate for diffraction that is nonlinear in wavenumber. The optimum parameters are obtained experimentally and theoretically and are shown to be in good accordance. Close to maximum measurable axial range, by applying this method, the FWHM of the point spread function is reduced by a factor of 5.6 and sensitivity is increased by 9.8 dB. PMID:22739929
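
A software analogue of the linearization (resampling a wavelength-linear spectrum onto a uniform wavenumber grid) looks like this; it is offered only as a point of comparison for the electronic clock-chirp method in the paper:

```python
import numpy as np

def linearize_k(spectrum, wavelengths):
    """Resample a spectrum sampled evenly in wavelength onto an evenly
    spaced wavenumber grid (k = 2*pi/lambda) by linear interpolation,
    so that a subsequent FFT axis is linear in k. Software stand-in for
    the paper's hardware clock chirp."""
    k = 2.0 * np.pi / np.asarray(wavelengths, dtype=float)
    order = np.argsort(k)                      # k decreases as wavelength grows
    k_uniform = np.linspace(k.min(), k.max(), len(k))
    s = np.interp(k_uniform, k[order], np.asarray(spectrum, dtype=float)[order])
    return k_uniform, s
```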

  1. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.

  2. Computer-vision-based weed identification of images acquired by 3CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; He, Yong; Fang, Hui

    2006-09-01

    Selective application of herbicide to weeds at an early stage of crop growth is an important aspect of site-specific management of field crops. To make on-line weed detection more adaptive, many researchers have studied image processing techniques for the intensive computation and feature extraction needed to distinguish weeds from crops and the soil background. This paper investigated the potential of using digital images acquired by the MegaPlus TM MS3100 3-CCD camera to segment the background soil from the plants in question and further to recognize weeds among the crops, using the Matlab script language. The near-infrared band image (center 800 nm; width 65 nm) was selected principally for segmenting soil, and the cottons were distinguished from the thistles based on their respective relative areas (pixel counts) in the whole image. The results show adequate recognition: the pixel proportions of soil, cotton leaves and thistle leaves were 78.24% (-0.20% deviation), 16.66% (+2.71% SD) and 4.68% (-4.19% SD). However, problems still exist in separating and allocating single plants because of their clustering in the images. The information in the images acquired via the other two channels, i.e., the green and red bands, needs to be extracted to aid crop/weed discrimination. More optical specimens should be acquired for calibration and validation to establish a weed-detection model that can be applied effectively in the field.
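
The area-based discrimination can be sketched as below; splitting plant pixels into crop and weed classes by a second intensity threshold is our simplification of the paper's relative-area approach, and both thresholds are hypothetical:

```python
import numpy as np

def pixel_proportions(nir, soil_thresh, crop_weed_split):
    """Sketch of NIR-based segmentation (thresholds are hypothetical):
    pixels below `soil_thresh` count as soil; remaining plant pixels are
    split into crop and weed classes by a second intensity threshold, and
    each class is reported as a fraction of the whole image."""
    nir = np.asarray(nir, dtype=float)
    total = nir.size
    soil = nir < soil_thresh
    crop = (~soil) & (nir >= crop_weed_split)
    weed = (~soil) & (nir < crop_weed_split)
    return soil.sum() / total, crop.sum() / total, weed.sum() / total
```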

  3. Undulator-Based Production of Polarized Positrons, A Proposal for the 50-GeV Beam in the FFTB

    SciTech Connect

    G. Alexander; P. Anthony; V. Bharadwaj; Yu.K. Batygin; T. Behnke; S. Berridge; G.R. Bower; W. Bugg; R. Carr; E. Chudakov; J.E. Clendenin; F.J. Decker; Yu. Efremenko; T. Fieguth; K. Flottmann; M. Fukuda; V. Gharibyan; T. Handler; T. Hirose; R.H. Iverson; Yu. Kamyshkov; H. Kolanoski; T. Lohse; Chang-guo Lu; K.T. McDonald; N. Meyners; R. Michaels; A.A. Mikhailichenko; K. Monig; G. Moortgat-Pick; M. Olson; T. Omori; D. Onoprienko; N. Pavel; R. Pitthan; M. Purohit; L. Rinolfi; K.P. Schuler; J.C. Sheppard; S. Spanier; A. Stahl; Z.M. Szalata; J. Turner; D. Walz; A. Weidemann; J. Weisend

    2003-06-01

    The full exploitation of the physics potential of future linear colliders such as the JLC, NLC, and TESLA will require the development of polarized positron beams. In the proposed scheme of Balakin and Mikhailichenko [1] a helical undulator is employed to generate photons of several MeV with circular polarization which are then converted in a relatively thin target to generate longitudinally polarized positrons. This experiment, E-166, proposes to test this scheme to determine whether such a technique can produce polarized positron beams of sufficient quality for use in future linear colliders. The experiment will install a meter-long, short-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) at SLAC. A low-emittance 50-GeV electron beam passing through this undulator will generate circularly polarized photons with energies up to 10 MeV. These polarized photons are then converted to polarized positrons via pair production in thin targets. Titanium and tungsten targets, which are both candidates for use in linear colliders, will be tested. The experiment will measure the flux and polarization of the undulator photons, and the spectrum and polarization of the positrons produced in the conversion target, and compare the measurement results to simulations. Thus the proposed experiment directly tests for the first time the validity of the simulation programs used for the physics of polarized pair production in finite matter, in particular the effects of multiple scattering on polarization. Successful comparison of the experimental results to the simulations will lead to greater confidence in the proposed designs of polarized positron sources for the next generation of linear colliders. This experiment requests six weeks of time in the FFTB beam line: three weeks for installation and setup and three weeks of beam for data taking. A 50-GeV beam with about twice the SLC emittance at a repetition rate of 30 Hz is required.

  4. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  5. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  6. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of the charge-coupled device (CCD) drive circuits, image processing circuits, and power supply circuits. The CCD drive circuits convert the TTL (transistor-transistor logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals to digital signals that are processed subsequently; the TTL timing, auto-exposure control, CFA demosaicing, and gamma correction are accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality: power noise affects image quality directly, and we reduce power noise in hardware, which is very effective. The CCD in this system is the KAI-0340, which can output 210 full-resolution frames per second, and our camera works outstandingly in this mode. Because traditional auto-exposure control algorithms are slow to reach a proper exposure level, it is necessary to develop a fast auto-exposure control method; we present a new auto-exposure algorithm suited to high-speed cameras. Colour demosaicing is critical for digital cameras because it converts the Bayer sensor mosaic output to a full-colour image, which determines the output image quality of the camera. Complex algorithms can achieve high quality but cannot be implemented in hardware, so a low-complexity demosaicing method is presented that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of this paper.
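
A generic fast auto-exposure step (not the algorithm of this camera) can be written as a damped multiplicative update toward a target mean image level; the exposure bounds here are hypothetical:

```python
def auto_exposure_step(mean_level, exposure,
                       target=128.0, gain=0.7, e_min=1e-5, e_max=0.05):
    """One iteration of a simple fast auto-exposure loop (a generic
    sketch, not the KAI-0340 camera's algorithm): scale the exposure
    multiplicatively toward the target mean image level, damped by
    `gain`, and clamp to a hypothetical usable exposure range."""
    ratio = target / max(mean_level, 1.0)           # how far off the mean is
    new_exposure = exposure * (1.0 + gain * (ratio - 1.0))
    return min(max(new_exposure, e_min), e_max)     # clamp to sensor limits
```

With `gain` below 1 the loop converges geometrically instead of oscillating around the target.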

  7. Space Camera

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Nikon's F3 35mm camera was specially modified for use by Space Shuttle astronauts. The modification work produced a spinoff lubricant. Because lubricants in space have a tendency to migrate within the camera, Nikon conducted extensive development to produce nonmigratory lubricants; variations of these lubricants are used in the commercial F3, giving it better performance than conventional lubricants. Another spinoff is the coreless motor which allows the F3 to shoot 140 rolls of film on one set of batteries.

  8. Design and fabrication of a vacuum ultraviolet pinhole camera based on thin phosphor screens (abstract)

    NASA Astrophysics Data System (ADS)

    Baciero, A.; Zurro, B.; McCarthy, K. J.; De la Fuente, M. C.; Burgos, C.

    2001-01-01

    A compact and highly sensitive pinhole camera has been developed for acquiring broadband vacuum ultraviolet (VUV) emission profiles of plasmas in the TJ-II. Its principal purpose is to obtain profiles with sufficiently high resolution so as to aid in the search for topological structures in stellarator plasmas. It can also be used to support experiments such as impurity injection by laser ablation. The original and purpose-designed camera reported here provides optimum sensitivity over a broad spectral range. In the camera vacuum chamber, plasma radiation passes through a pinhole and a filter before impinging on a 5×30 mm area of a P-46 phosphor screen. Thin screens of this material were extensively characterized using calibrated monochromatic VUV sources and it was found that their response is maximized when operated in reflection mode.1 Luminescent light emitted from the vacuum side of the screen is then focused by a toroidal mirror (the pinhole is cut in its center) onto the outside of a quartz window which is mounted on the side of camera. Finally, this intermediate image is relayed onto the surface of a gated and intensified linear photodiode array (25 μm by 25 mm) having 700 active pixels. This system is capable of obtaining radial VUV profiles every 12 ms and of recording them in ⩾100 ns.

  9. Design and Implementation of Lunar Immersive Visualization System Based on Chang'E-3 Data of Panoramic Camera

    NASA Astrophysics Data System (ADS)

    Gao, X. Y.; Liu, L. J.; Ren, X.; Mu, L. L.; Li, C. L.; Yan, W.; Wang, F. F.; Wang, W. R.; Zeng, X. G.

    2015-10-01

    In this paper, we present a lunar immersive visualization system developed to assist lunar scientists in establishing science mission goals for the Chang'E-3 mission. Based on panoramic camera data and a star catalogue, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D, in combination with a 4-pipe projection system.

  10. Video-based realtime IMU-camera calibration for robot navigation

    NASA Astrophysics Data System (ADS)

    Petersen, Arne; Koch, Reinhard

    2012-06-01

    This paper introduces a new method for fast calibration of inertial measurement units (IMU) that are rigidly coupled with cameras. That is, the relative rotation and translation between the IMU and the camera is estimated, allowing for the transfer of IMU data to the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in realtime, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the used trajectory online, increasing the quality and minimizing the time effort of calibration. Except for a marker pattern used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time: depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows for calibrating the system at startup. By this, deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependence on the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on the applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.
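    Once the extrinsic rotation and the gyro nuisance parameters are estimated, transferring IMU data into the camera frame is a change of coordinates plus bias/scale correction. A minimal sketch with illustrative names (lever-arm acceleration terms arising from the translation are deliberately omitted for brevity):

```python
import numpy as np

def imu_to_camera(omega_meas, acc_meas, R_ic, gyro_bias, gyro_scale):
    """Map raw IMU readings into the camera coordinate frame.

    R_ic                 : estimated IMU-to-camera rotation matrix (extrinsic)
    gyro_bias/gyro_scale : the IMU's nuisance parameters (additive bias,
                           per-axis scale), as estimated by the calibration
    """
    # Correct the body rate, then rotate it into the camera frame.
    omega_cam = R_ic @ (gyro_scale * (omega_meas - gyro_bias))
    # Rotate acceleration; lever-arm (centripetal/angular) terms omitted here.
    acc_cam = R_ic @ acc_meas
    return omega_cam, acc_cam
```

    In the paper these quantities feed an iterated Kalman filter; the sketch only shows why an accurate R_ic matters: any rotation error directly biases every transferred measurement.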

  11. Comparison of Target- and Mutual Information Based Calibration of Terrestrial Laser Scanner and Digital Camera for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Omidalizarandi, M.; Neumann, I.

    2015-12-01

    In the current state of the art, geodetic deformation analysis of natural and artificial objects (e.g. dams, bridges,...) is ongoing research in both static and kinematic modes and has received considerable interest from researchers and geodetic engineers. In this work, in order to increase the accuracy of geodetic deformation analysis, a terrestrial laser scanner (TLS; here the Zoller+Fröhlich IMAGER 5006) and a high-resolution digital camera (Nikon D750) are integrated to benefit complementarily from each other. In order to optimally combine the acquired data of the hybrid sensor system, a highly accurate estimation of the extrinsic calibration parameters between the TLS and the digital camera is a vital preliminary step. Thus, the calibration of the aforementioned hybrid sensor system can be separated into three single calibrations: calibration of the camera, calibration of the TLS, and extrinsic calibration between the TLS and the digital camera. In this research, we focus on highly accurate estimation of the extrinsic parameters between the fused sensors, and target-based and targetless (mutual information based) methods are applied. In target-based calibration, different types of observations (image coordinates, TLS measurements, and laser tracker measurements for validation) are utilized, and variance component estimation is applied to optimally assign adequate weights to the observations. Space resection bundle adjustment based on the collinearity equations is solved using the Gauss-Markov and Gauss-Helmert models. Statistical tests are performed to discard outliers and large residuals in the adjustment procedure. At the end, the two aforementioned approaches are compared, their advantages and disadvantages are investigated, and numerical results are presented and discussed.
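    The target-based calibration rests on weighted least squares in the Gauss-Markov model. A minimal sketch of that core adjustment step (illustrative only; it is not the full space-resection bundle adjustment with variance component estimation described in the abstract):

```python
import numpy as np

def gauss_markov(A, l, P):
    """Weighted least-squares adjustment in the Gauss-Markov model.

    Solves x_hat = (A^T P A)^-1 A^T P l, where A is the design matrix,
    l the observation vector and P the weight matrix (from a priori
    variances); returns the estimate and the residuals v = A x_hat - l."""
    N = A.T @ P @ A                       # normal equation matrix
    x_hat = np.linalg.solve(N, A.T @ P @ l)
    v = A @ x_hat - l
    return x_hat, v
```

    Variance component estimation, as used by the authors, would then iterate on P so that each observation group (image coordinates vs. TLS measurements) receives a weight consistent with its actual residual scatter.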

  12. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  13. Compact pnCCD-based X-ray camera with high spatial and energy resolution: a color X-ray camera.

    PubMed

    Scharf, O; Ihle, S; Ordavo, I; Arkadiev, V; Bjeoumikhov, A; Bjeoumikhova, S; Buzanich, G; Gubzhokov, R; Günther, A; Hartmann, R; Kühbacher, M; Lang, M; Langhoff, N; Liebel, A; Radtke, M; Reinholz, U; Riesemeier, H; Soltau, H; Strüder, L; Thünemann, A F; Wedell, R

    2011-04-01

    For many applications there is a requirement for nondestructive analytical investigation of the elemental distribution in a sample. With the improvement of X-ray optics and spectroscopic X-ray imagers, full field X-ray fluorescence (FF-XRF) methods are feasible. A new device for high-resolution X-ray imaging, an energy and spatial resolving X-ray camera, is presented. The basic idea behind this so-called "color X-ray camera" (CXC) is to combine an energy dispersive array detector for X-rays, in this case a pnCCD, with polycapillary optics. Imaging is achieved using multiframe recording of the energy and the point of impact of single photons. The camera was tested using a laboratory 30 μm microfocus X-ray tube and synchrotron radiation from BESSY II at the BAMline facility. These experiments demonstrate the suitability of the camera for X-ray fluorescence analytics. The camera simultaneously records 69,696 spectra with an energy resolution of 152 eV for manganese K(α) with a spatial resolution of 50 μm over an imaging area of 12.7 × 12.7 mm(2). It is sensitive to photons in the energy region between 3 and 40 keV, limited by a 50 μm beryllium window, and the sensitive thickness of 450 μm of the chip. Online preview of the sample is possible as the software updates the sums of the counts for certain energy channel ranges during the measurement and displays 2-D false-color maps as well as spectra of selected regions. The complete data cube of 264 × 264 spectra is saved for further qualitative and quantitative processing.
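    The multiframe single-photon recording can be pictured as accumulating each event into a position-energy data cube, from which energy-windowed 2-D maps give the false-colour preview. A schematic sketch (the 264 x 264 spatial grid follows the abstract; the channel count and event format are illustrative assumptions):

```python
import numpy as np

def accumulate_cube(events, shape=(264, 264), n_channels=1024):
    """Build an (x, y, energy-channel) data cube from single-photon events.

    events: iterable of (x, y, channel) tuples gathered over many frames.
    Each pixel thus accumulates a full fluorescence spectrum."""
    cube = np.zeros(shape + (n_channels,), dtype=np.uint32)
    for x, y, ch in events:
        cube[x, y, ch] += 1
    return cube

def energy_map(cube, ch_lo, ch_hi):
    """2-D false-colour map: counts within one energy-channel window
    (e.g. around an element's K-alpha line)."""
    return cube[:, :, ch_lo:ch_hi].sum(axis=2)
```

    Updating such window sums during acquisition is what allows the online preview described in the abstract, while the full cube is kept for quantitative processing.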

  14. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for e.g. dark skin types. A small smartphone-based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Taking the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the dermatologist's evaluation was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift away from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging promises to improve the sensitivity and selectivity of allergy testing using a smartphone-based camera.

  15. Investigation on a small FoV gamma camera based on LaBr 3:Ce continuous crystal

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Bennati, P.; Cinti, M. N.; Vittorini, F.; Scafè, R.; Lo Meo, S.; Navarria, F. L.; Moschini, G.; Orsolini Cencelli, V.; De Notaristefani, F.

    2009-12-01

    Recently, scintillating crystals with high light yield coupled to photodetectors with high quantum efficiency have opened a new way to build gamma cameras with superior performance based on continuous crystals. In this work we propose the analysis of a gamma camera based on a continuous LaBr3:Ce crystal coupled to a multi-anode photomultiplier tube (MA-PMT). In particular, we consider four detector configurations differing in crystal thickness and assembly. We utilize a new position algorithm to reduce the position non-linearity that affects the intrinsic spatial resolution of small-FoV gamma cameras when the standard Anger algorithm is applied. The experimental data are obtained by scanning the detectors with a 0.4 mm collimated 99mTc source at 1.5 mm steps. An improvement in position linearity and spatial resolution of about a factor of two is obtained with the new algorithm. The best values of spatial resolution were 0.90 mm, 0.95 mm and 1.80 mm for the integrally assembled, 4.0 mm thick and 10 mm thick LaBr3:Ce crystals respectively.
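    The standard Anger algorithm that the new position algorithm improves upon is a signal-weighted centroid over the MA-PMT anodes. A minimal sketch of that baseline (the paper's own algorithm, which reduces the edge non-linearity of this estimator, is not reproduced here):

```python
import numpy as np

def anger_position(signals, anode_x, anode_y):
    """Standard Anger (centroid) position estimate from MA-PMT anode signals.

    signals          : 2-D array of anode charges for one event
    anode_x, anode_y : arrays of the anode centre coordinates (same shape)
    Returns the charge-weighted centroid (x, y) of the scintillation event."""
    total = signals.sum()
    x = (signals * anode_x).sum() / total
    y = (signals * anode_y).sum() / total
    return x, y
```

    Near the crystal edges the light distribution is truncated, so this centroid is pulled toward the centre; that is the non-linearity the authors' modified algorithm compensates.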

  16. Positron-rubidium scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.

    1990-01-01

    A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering the (effective) total integrated cross section has an energy dependence which is contrary to recent experimental measurements.

  17. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators.

    PubMed

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom's surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ε  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm(2), averaged group variances were  ±3.0 cm(2). The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application
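    The characteristic values reported here can be computed directly from a measured temperature-rise map. A sketch, assuming the common definition of the effective field size as the area heated to at least 50% of the maximum rise (the exact criterion the authors use is not stated in the abstract):

```python
import numpy as np

def effective_field_size(dT, pixel_area_cm2):
    """Effective field size (EFS): area where the temperature rise reaches
    at least 50% of the maximum rise (assumed criterion)."""
    return np.count_nonzero(dT >= 0.5 * dT.max()) * pixel_area_cm2

def temperature_area_histogram(dT, pixel_area_cm2, bins=20):
    """TAH: for a series of temperature-rise levels, the area heated to at
    least that level (a cumulative area-vs-temperature curve)."""
    levels = np.linspace(0.0, dT.max(), bins)
    areas = [np.count_nonzero(dT >= t) * pixel_area_cm2 for t in levels]
    return levels, areas
```

    Both quantities reduce an IR frame to a few comparable numbers, which is what makes them suitable for a routine QA protocol.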

  19. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators

    NASA Astrophysics Data System (ADS)

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom’s surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ɛ  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm2, averaged group variances were  ±3.0 cm2. The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application

  20. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  1. On-orbit calibration approach for star cameras based on the iteration method with variable weights.

    PubMed

    Wang, Mi; Cheng, Yufeng; Yang, Bo; Chen, Xiao

    2015-07-20

    To perform efficient on-orbit calibration for star cameras, we developed an attitude-independent calibration approach for global optimization and noise removal by least-squares estimation using multiple star images, with which the optimal principal point, focal length, and the high-order focal plane distortion can be obtained in one step in full consideration of the interaction among star camera parameters. To cope with stars that may be misidentified in star images, an iteration method with variable weights is introduced to eliminate the influence of misidentified star pairs. The approach increases the precision of least-squares estimation and uses fewer star images. The proposed approach has been verified to be precise and robust in three experiments. PMID:26367824
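    An iteration method with variable weights of this general kind can be sketched as iteratively re-weighted least squares: observations with large residuals (e.g. misidentified star pairs) are progressively down-weighted. The weight function below is illustrative, not the authors' exact choice:

```python
import numpy as np

def irls(A, l, iters=10, eps=1e-6):
    """Least squares with iteratively re-weighted observations.

    A : design matrix, l : observations. Each iteration solves a weighted
    normal-equation system, then lowers the weights of observations whose
    residuals are large relative to a robust scale estimate."""
    w = np.ones(len(l))
    for _ in range(iters):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ l)
        v = np.abs(A @ x - l)
        sigma = np.median(v) / 0.6745 + eps          # robust scale (MAD-based)
        w = np.where(v <= 2 * sigma, 1.0, (2 * sigma / v) ** 2)  # variable weights
    return x, w
```

    Outlying observations end up with near-zero weight, so the final estimate is driven almost entirely by the correctly identified pairs.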

  2. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal. PMID:22163620
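    A reaction-diffusion model of the kind adopted here couples local reaction terms with diffusion to neighbouring nodes so that spatial patterns self-organize; the Gray-Scott system is a standard example. A minimal 1-D sketch (parameters illustrative; this is the pattern-forming mechanism, not the paper's coding-rate controller itself):

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, F=0.035, k=0.06, dt=1.0):
    """One explicit Euler step of a 1-D Gray-Scott reaction-diffusion system.

    u, v are concentration fields on a ring of nodes; diffusion is a
    discrete Laplacian with periodic boundaries, and the u*v^2 reaction
    term is what produces spots and stripes."""
    lap = lambda a: np.roll(a, 1) + np.roll(a, -1) - 2 * a   # periodic Laplacian
    uvv = u * v * v
    u_new = u + dt * (Du * lap(u) - uvv + F * (1 - u))
    v_new = v + dt * (Dv * lap(v) + uvv - (F + k) * v)
    return u_new, v_new
```

    In the proposed mechanism each camera node would run such an update using only its neighbours' states, which is why the coding-rate field can be controlled without a central coordinator.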

  4. Design of Belief Propagation Based on FPGA for the Multistereo CAFADIS Camera

    PubMed Central

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time restrictions by using extremely high-performance signal processing capability through parallelism and by accessing several memories simultaneously. The quantified results with 16-bit precision show that performance is really close to that of the original Matlab implementation of the algorithm. PMID:22163404
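    The core of belief propagation for depth estimation is the per-edge message update over depth/disparity labels. A minimal min-sum sketch with a linear smoothness cost, computed via the usual two-pass lower-envelope trick (a generic textbook formulation, not the CAFADIS VHDL design):

```python
import numpy as np

def bp_message(cost, incoming, smooth_weight=1.0):
    """Min-sum belief-propagation message over depth labels.

    cost     : unary data cost per label at the sending pixel
    incoming : messages from the sender's other neighbours (arrays over labels)
    The linear smoothness prior is applied with a forward/backward sweep,
    which is O(labels) instead of O(labels^2) and pipelines well in hardware."""
    h = cost + sum(incoming)              # aggregate belief at the sender
    msg = h.copy()
    for i in range(1, len(msg)):          # forward pass of the lower envelope
        msg[i] = min(msg[i], msg[i - 1] + smooth_weight)
    for i in range(len(msg) - 2, -1, -1):  # backward pass
        msg[i] = min(msg[i], msg[i + 1] + smooth_weight)
    return msg - msg.min()                # normalize to avoid drift
```

    After a few sweeps, each pixel picks the label minimizing its data cost plus all incoming messages, giving the per-pixel in-focus depth.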

  5. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    PubMed Central

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring for its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication and a network is easily overflown with a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal. PMID:22163620

  6. Positron annihilation in flight

    NASA Astrophysics Data System (ADS)

    Tudor Jones, Goronwy

    1999-09-01

    In this resource article, an exceptional bubble chamber picture - showing the annihilation of a positron (antielectron e+ ) in flight - is discussed in detail. Several other esoteric phenomena (some not easy to show on their own!) also manifest themselves in this picture - pair creation or the materialization of a high energy photon into an electron-positron pair; the `head-on' collision of a positron with an electron, from which the mass of the positron can be estimated; the Compton Effect ; an example of the emission of electromagnetic radiation (photons) by accelerating charges (bremsstrahlung ).

  7. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images utilizing a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of depth image. The captured video image of a moving object frequently shows motion drag between sequentially captured IR images, which results in a so-called `motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel `single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, hence the motion blur problem is cancelled. The resulting performance is very useful in applications of 3D cameras to human-machine interaction devices such as user interfaces of TVs, monitors, or hand-held devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with depth images simultaneously at the `single shot' frame rate.
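    With four phase-shifted IR images, depth follows from the classic four-bucket phase computation; in the proposed architecture all four are captured in one shot through the 2x2 lens/shutter array, which is why the motion blur cancels. A sketch (a 20 MHz modulation frequency and the usual sampling convention are assumptions for illustration):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def tof_depth(i0, i90, i180, i270, f_mod=20e6):
    """Depth from four phase-shifted IR images (four-bucket TOF method).

    i0..i270 : intensity images sampled at 0/90/180/270 degree reference
    phase. The recovered modulation phase is proportional to the round-trip
    time, so depth = c * phase / (4 * pi * f_mod)."""
    phase = np.arctan2(i270 - i90, i0 - i180) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)
```

    Note the unambiguous range at 20 MHz is c / (2 f_mod) = 7.5 m; larger distances wrap around, which is a generic property of continuous-wave TOF, not specific to this camera.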

  8. Developing a camera-phone-based drug barcode reader and support system.

    PubMed

    Chen, Wen-Chih; Chang, Polun; Chen, Li-Fen

    2006-01-01

    The Food and Drug Administration (FDA) published a rule that requires certain human drug and biological product labels to carry a linear barcode, to help reduce the number of medication errors. This paper describes a software support system that helps caregivers and patients identify the barcode using a camera phone. Creating software that lets people identify the barcode and see the information they need directly on their phone is an economical and effective approach.

  9. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    A visual surveillance system for law enforcement or police case investigation differs from traditional applications, for it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and risk entropy is introduced to model the requirements of a police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.

  10. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100km2). The precision of estimates from the control and treatment surveys, were also comparable and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  11. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except that a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p > 0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28–9.28 leopards/100 km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100 km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that, at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  12. Auto-measurement system of aerial camera lens' resolution based on orthogonal linear CCD

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-liang; Zhang, Yu-ye; Ding, Hong-yi

    2010-10-01

    The resolution of an aerial camera lens is one of the camera's most important performance indexes, and measuring and calibrating resolution are important test items in camera maintenance. In the traditional method, an operator observes the resolution panel of a collimator through a reading microscope and performs manual computation; this is inefficient, susceptible to human factors, and yields unstable measurement results. An auto-measurement system for aerial camera lens resolution is introduced, which uses an orthogonal linear CCD sensor as the detector in place of the reading microscope. The system measures automatically and displays results in real time. In order to measure the smallest resolvable element of the resolution panel, two orthogonal linear CCDs are laid on the imaging plane of the measured lens, forming four intersection points. A coordinate system is defined by the origin of the linear CCDs, and a circle is determined by the four intersection points. To obtain the circle's radius, the image of the resolution panel is first converted into electrical pulse widths, which are sent to a computer through an amplifying circuit, a threshold comparator and a counter. The smallest circle is then extracted for measurement; circle extraction uses the wavelet transform, which is localized in both the time and frequency domains and supports multi-scale analysis. Finally, the resolution of the measured lens is obtained from the resolution formula. Analysis of the measurement precision in practical use indicates that precision improves when the linear CCD replaces the reading microscope, and that the remaining system error is determined by the CCD pixel size. As CCD technology develops and pixels become smaller, the system error will be reduced further still. So the auto
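    The circle-from-four-points step above can be sketched as an algebraic least-squares circle fit (a generic Kasa fit under assumed coordinates, not the paper's exact procedure):

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F);
    center = (-D/2, -E/2), radius = sqrt(cx^2 + cy^2 - F).
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return (cx, cy), np.sqrt(cx**2 + cy**2 - F)
```

    With the four CCD intersection points as input, the returned diameter is the quantity the resolution formula consumes.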

  13. Camera-based stereo laser-tracking system for robot-positioning applications

    NASA Astrophysics Data System (ADS)

    Allen, Charles R.; Mistry, Nilesh

    1993-08-01

    This paper describes the theory behind laser tracking measurement systems (LTMS) and the development of a prototype LTMS system at Newcastle. An assessment is made of the accuracy of positioning achieved by the system in the control of the end-effector position of a Puma 560 robot manipulator using a CCD camera positioning sensor and a hollow cube retro-reflector placed on the robot wrist.

  14. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
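    The multi-exposure merge idea can be sketched in a few lines (assuming a linear sensor response and a triangle weighting that favors well-exposed pixels; the camera's FPGA pipeline is far more elaborate):

```python
import numpy as np

def merge_hdr(frames, exposure_times):
    """Merge multiple exposures into one HDR radiance map.

    Assumes a linear sensor response: each pixel's radiance is estimated
    as value / exposure_time, and the estimates are combined with a
    triangle weight peaking at mid-range so that pixels saturated in the
    long exposure and dark in the short one contribute little.
    """
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        z = img.astype(float) / 255.0        # normalize to [0, 1]
        w = 1.0 - np.abs(2.0 * z - 1.0)      # triangle weight, peak at 0.5
        w = np.maximum(w, 1e-4)              # keep weights strictly positive
        num += w * (z / t)                   # weighted radiance estimate
        den += w
    return num / den
```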

  15. Fast-rolling shutter compensation based on piecewise quadratic approximation of a camera trajectory

    NASA Astrophysics Data System (ADS)

    Lee, Yun Gu; Kai, Guo

    2014-09-01

    The rolling shutter effect commonly exists in video cameras and mobile phones equipped with complementary metal-oxide semiconductor sensors, and is caused by their row-by-row exposure mechanism. As video resolution in both the spatial and temporal domains increases dramatically, removing the rolling shutter effect quickly and effectively becomes a challenging problem, especially for devices with limited hardware resources. We propose a fast method to compensate for the rolling shutter effect, which uses a piecewise quadratic function to approximate the camera trajectory. The duration of the quadratic function in each segment is one frame (or half-frame), and each quadratic function is described by an initial velocity and a constant acceleration. The velocity and acceleration of each segment are estimated using only a few global (or semiglobal) motion vectors, which can be obtained from fast motion estimation algorithms. Geometric image distortion at each scanline is then inferred from the predicted camera trajectory for compensation. Experimental results on mobile phones with full-HD video demonstrate that our method not only runs in real time, but also achieves satisfactory visual quality.
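    The per-scanline correction implied by one quadratic trajectory segment can be sketched as follows (illustrative parameters; in the actual method v0 and a come from the estimated motion vectors):

```python
def scanline_offsets(v0, a, frame_time, num_rows, ref_row=0):
    """Per-scanline displacements implied by a quadratic camera
    trajectory x(t) = v0*t + 0.5*a*t^2 over one rolling-shutter frame.

    Row r is exposed at t_r = r * frame_time / num_rows; compensation
    warps each row by the camera displacement relative to ref_row.
    """
    def x(t):
        return v0 * t + 0.5 * a * t * t

    dt = frame_time / num_rows
    return [x(r * dt) - x(ref_row * dt) for r in range(num_rows)]
```

    Each returned value is the horizontal shift to undo for that scanline; a half-frame segment simply uses `frame_time / 2` with its own (v0, a).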

  16. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3-D display technology. The causes of visual discomfort in stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when the camera and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with traditional visual fatigue prediction models, a novel model is presented in which the visual fatigue degree is predicted using multiple linear regression combined with subjective evaluation. Consequently, each factor reflects a characteristic of the scene, and a total visual fatigue score is produced by the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.

  17. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  18. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
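    The distance-then-threshold idea can be sketched for a single cone and image slice as follows (a simplified surrogate distance to the cone surface, not the authors' exact solution-matrix kernel):

```python
import numpy as np

def cone_backproject_slice(apex, axis, half_angle, xs, ys, z, thresh):
    """Binary back-projection of one Compton-cone event into the image
    plane z = const.

    For each pixel, the angle between (pixel - apex) and the cone axis
    is compared with the cone half-angle; pixels whose approximate
    distance to the cone surface, r * |sin(angle - half_angle)|, falls
    below `thresh` are marked 1.
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    X, Y = np.meshgrid(np.asarray(xs, float), np.asarray(ys, float))
    D = np.stack([X - apex[0], Y - apex[1], np.full_like(X, z - apex[2])], axis=-1)
    r = np.maximum(np.linalg.norm(D, axis=-1), 1e-12)  # avoid divide-by-zero at the apex
    cos_phi = np.clip((D @ axis) / r, -1.0, 1.0)
    dist = r * np.abs(np.sin(np.arccos(cos_phi) - half_angle))
    return (dist < thresh).astype(np.uint8)
```

    Repeating this over all slices and summing the binary masks over all detector events yields the three-dimensional back-projection image described above.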

  19. Imaging performance comparison between a LaBr3: Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera.

    PubMed

    Russo, P; Mettivier, G; Pani, R; Pellegrini, R; Cinti, M N; Bennati, P

    2009-04-01

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting mode in planar imaging tests at 122 and 140 keV. The first camera is based on a continuous LaBr3:Ce scintillator crystal (49 x 49 x 5 mm3) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14 x 14 x 1 mm3) with 256 x 256 square pixels and a pitch of 55 microm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 microm, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.

  20. On-Line Detection of Defects on Fruit by Machinevision Systems Based on Three-Color-Cameras Systems

    NASA Astrophysics Data System (ADS)

    Xul, Qiaobao; Zou, Xiaobo; Zhao, Jiewen

    How to distinguish apple stem-ends and calyxes from defects is still a challenging problem due to the complexity of the process. It is known that a stem-end and a calyx cannot appear in the same image. A contaminated-apple detection method based on this fact is therefore developed in this article: if there are two or more suspect blobs in an apple's image, the apple is contaminated. No complex image processing or pattern recognition is needed, because the method only needs to count the blobs (including stem-ends and calyxes) in an apple's image. A machine vision system based on three color cameras is presented for the online detection of external defects. In this system, the fruits placed on rollers rotate while moving, and each camera grabs three images of each apple. After the apple is segmented from the black background by a multi-threshold method, defect segmentation and counting are performed on the apple images. Good separation between normal and contaminated apples was obtained for the three-camera system (94.5%), compared to the one-camera (63.3%) and two-camera (83.7%) systems. The disadvantage of this method is that it cannot distinguish defect types: defects such as bruising, scab, fungal growth and disease are all treated the same.
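    The decision rule above reduces to connected-component counting on a binary defect mask; a minimal sketch (hypothetical mask input, 4-connectivity assumed):

```python
def count_blobs(mask):
    """Count 4-connected foreground blobs in a binary 2-D grid
    (list of lists of 0/1) via iterative flood fill."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for i in range(h):
        for j in range(w):
            if mask[i][j] and not seen[i][j]:
                blobs += 1
                stack = [(i, j)]
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

def is_contaminated(mask):
    # A stem-end and a calyx cannot appear in the same view, so two or
    # more blobs imply at least one true defect.
    return count_blobs(mask) >= 2
```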

  1. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the local emission height that maximizes the correlation of the apparent patterns in localized pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution of less than 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute complementarily to aurora science, potentially forming a dense observation network.
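    A simplified two-station triangulation illustrates the underlying geometry (here the target is assumed to lie in the vertical plane through both cameras; the paper uses a full geographic coordinate transform rather than this reduction):

```python
import math

def altitude_from_elevations(baseline_km, elev_near_deg, elev_far_deg):
    """Triangulate emission altitude from the elevation angles measured
    at two stations separated by baseline_km along the ground.

    elev_far_deg:  elevation seen from the station farther from the target
    elev_near_deg: elevation seen from the nearer station (larger angle)

    From tan(e) = h / x at each station: h = d / (cot(e_far) - cot(e_near)).
    """
    cot_far = 1.0 / math.tan(math.radians(elev_far_deg))
    cot_near = 1.0 / math.tan(math.radians(elev_near_deg))
    return baseline_km / (cot_far - cot_near)
```

    Note how small the angular difference is for an 8 km baseline and a ~100 km emission height, which is why the pixel-level correlation matching described above is needed.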

  2. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  3. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    NASA Astrophysics Data System (ADS)

    Tohme, Michel S.; Qi, Jinyi

    2009-06-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 × 3 line phantom, an ultra-micro resolution phantom and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  4. Iterative Image Reconstruction for Positron Emission Tomography Based on Detector Response Function Estimated from Point Source Measurements

    PubMed Central

    Tohme, Michel S.; Qi, Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can be easily applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2-D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3-by-3 line phantom, an ultra-micro resolution phantom, and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  5. Performance of the Tachyon Time-of-Flight PET Camera

    DOE PAGESBeta

    Peng, Q.; Choong, W. -S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-23

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
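    The significance of the 314 ps figure can be illustrated by converting timing resolution into localization uncertainty along the line of response, using the standard TOF relation Δx = c·Δt/2 (a textbook conversion, not taken from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_position_uncertainty_cm(timing_fwhm_ps):
    """Localization FWHM along the line of response implied by a
    coincidence timing resolution: dx = c * dt / 2."""
    dt = timing_fwhm_ps * 1e-12
    return 100.0 * C * dt / 2.0
```

    So 314 ps corresponds to roughly 4.7 cm of positional information per event, versus about 8.2 cm at the ~550 ps of commercial cameras.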

  6. Performance of the Tachyon Time-of-Flight PET Camera

    SciTech Connect

    Peng, Q.; Choong, W. -S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-23

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  7. Performance of the Tachyon Time-of-Flight PET Camera

    PubMed Central

    Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon’s detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/− 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057

  8. Demonstration of First 9 Micron cutoff 640 x 486 GaAs Based Quantum Well Infrared PhotoDetector (QWIP) Snap-Shot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Maker, P. D.; Muller, R. E.

    1997-01-01

    In this paper, we discuss the development of this very sensitive long wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, NEAT, uniformity, and operability.

  9. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  10. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  11. Nikon Camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

    Nikon FM compact has simplification feature derived from cameras designed for easy, yet accurate use in a weightless environment. Innovation is a plastic-cushioned advance lever which advances the film and simultaneously switches on a built in light meter. With a turn of the lens aperture ring, a glowing signal in viewfinder confirms correct exposure.

  12. Evaluation of a dual-panel PET camera design for breast cancer imaging.

    PubMed

    Zhang, Jin; Chinn, Gary; Foudray, Angela M K; Habte, Frezghi; Olcott, Peter; Levin, Craig S

    2006-01-01

    We are developing a novel, portable dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging. With a sensitive area of approximately 150 cm(2), this camera is based on arrays of lutetium oxyorthosilicate (LSO) crystals (1x1x3 mm(3)) coupled to 11x11-mm(2) position-sensitive avalanche photodiodes (PSAPD). GATE open source software was used to perform Monte Carlo simulations to optimize the parameters for the camera design. The noise equivalent count (NEC) rate, together with the true, scatter, and random counting rates, was simulated at different time and energy windows. Focal plane tomography (FPT) was used for visualizing the tumors at different depths between the two detector panels. Attenuation and uniformity corrections were applied to images. PMID:17646005
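    The NEC figure of merit mentioned above has a standard form (shown here in its common version; the paper's exact correction factors may differ):

```python
def nec_rate(trues, scatters, randoms, k=2.0):
    """Noise-equivalent count rate, NEC = T^2 / (T + S + k*R).

    k = 2 is commonly used when randoms are estimated by delayed-window
    subtraction; k = 1 corresponds to a noiseless randoms estimate.
    """
    return trues**2 / (trues + scatters + k * randoms)
```

    NEC is the rate of "clean" counts that would give the same statistical quality as the actual trues-plus-background mixture, which is why it is the quantity optimized over time and energy windows.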

  13. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the small size of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific-integrated-circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. 
The performance tests of this camera

  14. Development of neutron Anger-camera detector based on flatpanel PMT

    NASA Astrophysics Data System (ADS)

    Hirota, Katsuya; Satoh, Setsuo; Sakai, Kenji; Shinohara, Takenao; Ikeda, Kazuaki; Mishima, Kenji; Yamada, Satoru; Oku, Takayuki; Suzuki, Jun-ichi; Furusaka, Michihiro; Shimizu, Hirohiko M.

    2006-11-01

    A neutron scintillation detector and its data-acquisition system have been developed for neutron scattering measurements. A 64-channel flat-panel photomultiplier is used for the Anger-camera method. Sensitivity to γ-ray background is very low owing to the use of the ZnS/6LiF scintillator. The spatial resolution is less than 1 mm. The effective area of this detector is around 25 cm^2, and it can easily be expanded to a larger area with small dead space using a multi-photomultiplier-tube system. The fast DAQ system comprises 10-bit 100 MHz flash ADCs, FPGA chips and a USB 2.0 interface.
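The Anger-camera method locates each scintillation event at the signal-weighted centroid of the photomultiplier anode outputs. A minimal sketch of that centroid computation, assuming an 8 × 8 flat-panel anode grid and an illustrative 6 mm pitch (neither value taken from the record):

```python
# Illustrative Anger-logic position estimate: the event position is the
# signal-weighted centroid of the anode amplitudes. Grid size and pitch
# are assumptions for the sketch, not values from the record.
def anger_centroid(signals, pitch_mm=6.0):
    """signals: nested list (rows x cols) of anode amplitudes;
    returns (x, y) in mm relative to the panel centre."""
    n = len(signals)
    total = sum(sum(row) for row in signals)
    if total == 0:
        raise ValueError("no signal")
    offset = (n - 1) / 2.0          # centre of the anode grid
    x = sum(signals[r][c] * (c - offset) * pitch_mm
            for r in range(n) for c in range(len(signals[r]))) / total
    y = sum(signals[r][c] * (r - offset) * pitch_mm
            for r in range(n) for c in range(len(signals[r]))) / total
    return x, y

# A light spot concentrated on anode (row 3, col 4) of an 8x8 panel:
grid = [[0.0] * 8 for _ in range(8)]
grid[3][4] = 100.0
print(anger_centroid(grid))  # -> (3.0, -3.0)
```

In a real readout the flash-ADC samples would first be integrated per channel; the centroid step itself is unchanged.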

  15. Research on the position estimation of human movement based on camera projection

    NASA Astrophysics Data System (ADS)

    Yi, Zhang; Yuan, Luo; Hu, Huosheng

    2005-06-01

    During the rehabilitation of post-stroke patients, their movements need to be localized and learned so that incorrect movements can be instantly corrected. Tracking these movements is therefore vital to the rehabilitative course, and accurate position estimation is central to such tracking. In this paper, the characteristics of the human movement system are first analyzed. Next, a camera and an inertial sensor are used to independently measure the position of the moving body, and a Kalman filter algorithm is proposed to fuse the two measurements into an optimal estimate of the position. Finally, the performance of the method is analyzed.
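The camera/inertial fusion step can be sketched as a one-dimensional Kalman filter: the inertial sensor drives the prediction and the camera position measurement drives the correction. This is an illustrative sketch, not the authors' implementation; all noise parameters and readings below are assumed:

```python
# Scalar Kalman fusion sketch (assumed 1-D motion, illustrative noise values).
def kalman_fuse(x, P, v_inertial, z_camera, dt=0.1, q=0.01, r=0.04):
    x_pred = x + v_inertial * dt       # predict: integrate inertial velocity
    P_pred = P + q                     # inertial drift modeled as process noise
    K = P_pred / (P_pred + r)          # Kalman gain
    x_new = x_pred + K * (z_camera - x_pred)   # correct with camera reading
    P_new = (1 - K) * P_pred
    return x_new, P_new

x, P = 0.0, 1.0
for z in [0.11, 0.19, 0.32, 0.41]:     # hypothetical camera positions (m)
    x, P = kalman_fuse(x, P, 1.0, z)   # assumed 1 m/s inertial velocity
print(f"fused position {x:.3f} m, variance {P:.4f}")
```

The estimate settles near the camera track while the variance shrinks well below its initial value, which is the qualitative behavior the fusion is meant to deliver.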

  16. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming no manufacturing errors, a home-built program was used to simulate images affected by the coupling-distance, movement and rotation errors that can arise during microlens installation. Examination of these images, together with the sub-aperture and refocused images, shows that different microlens assembly errors produce different degrees of blur and deformation, while the sub-aperture images exhibit aliasing, obscuration and other distortions that result in unclear refocused images.
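The Monte Carlo approach can be sketched by drawing the three error types named above from tolerance ranges and propagating them to a first-order sub-image shift. All tolerances and the focal length below are invented for illustration, not taken from the paper:

```python
import random, math

# Monte Carlo sketch of microlens assembly tolerances (all numbers invented):
# draw random coupling-distance, decentre and tilt errors and estimate the
# resulting first-order shift of a microlens sub-image on the sensor.
def sample_subimage_shift(rng, f_mm=0.5, tol_axial=0.01, tol_lateral=0.005,
                          tol_tilt_deg=0.1):
    dz = rng.uniform(-tol_axial, tol_axial)       # coupling-distance error
    dx = rng.uniform(-tol_lateral, tol_lateral)   # movement (decentre) error
    tilt = math.radians(rng.uniform(-tol_tilt_deg, tol_tilt_deg))  # rotation
    # lateral shift plus tilt-induced displacement over the perturbed focus
    return dx + (f_mm + dz) * math.tan(tilt)

rng = random.Random(42)
shifts = [sample_subimage_shift(rng) for _ in range(10_000)]
mean = sum(shifts) / len(shifts)
rms = math.sqrt(sum(s * s for s in shifts) / len(shifts))
print(f"mean shift {mean:+.5f} mm, RMS {rms:.5f} mm")
```

The zero-centred mean and micron-scale RMS are what one expects from symmetric tolerances; the paper's full simulation instead renders whole images under each error draw.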

  17. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate-specific antigen (PSA) is widely used to screen for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs and potentially find cancerous tissues at early stages, but their application to diagnosing prostate cancer has been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with a wide band gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few millimeters of CZT provide adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular in several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it can potentially detect prostate cancers at their early stages. The performance tests of this camera

  18. Positrons for linear colliders

    SciTech Connect

    Ecklund, S.

    1987-11-01

    The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)

  19. Texas Intense Positron Source (TIPS)

    NASA Astrophysics Data System (ADS)

    O'Kelly, D.

    2003-03-01

    The Texas Intense Positron Source (TIPS) is a state-of-the-art variable-energy positron beam under construction at the Nuclear Engineering Teaching Laboratory (NETL). Projected intensities on the order of 10^7 e+/second using ^64Cu as the positron source are expected. Owing to its short half-life (t1/2 = 12.8 hrs), plans are to produce the ^64Cu isotope on-site using beam port 1 of the NETL TRIGA Mark II reactor. Following tungsten moderation, the positrons will be electrostatically focused and accelerated from a few tens of eV up to 30 keV. This intensity and energy range should allow routine performance of several analytical techniques of interest to surface scientists (PALS, PADB and perhaps PAES and LEPD). The TIPS project is being developed in parallel phases. Phase I entails construction of the vacuum system, source chamber, main beam line, and electrostatic/magnetic focusing and transport system, as well as moderator design. Initial construction, testing and characterization of the moderator and beam transport elements are underway and will use a commercially available 10 mCi ^22Na radioisotope as a source of positrons. Phase II is concerned primarily with the Cu source geometry and thermal properties, as well as production and physical handling of the radioisotope. Additional instrument optimization based on experience gained during Phase I will be incorporated in the final design. Current progress of both phases will be presented, along with motivations and future directions.

  20. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    PubMed

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photodetector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photodetectors. Thus, the Doppler frequency of each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect. PMID:27410133
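The claimed square-root scaling can be checked numerically: treating each speckle as an independent noisy Doppler-frequency estimate, averaging over N speckles should shrink the standard deviation by 1/√N. A sketch with illustrative numbers (frequency, noise level and trial counts are assumptions, not the paper's values):

```python
import random, statistics

# Each speckle yields an independent noisy frequency estimate; averaging
# N of them should reduce the spread by 1/sqrt(N).
def doppler_estimate(rng, f_true, sigma, n_speckles):
    return statistics.fmean(rng.gauss(f_true, sigma)
                            for _ in range(n_speckles))

rng = random.Random(0)
f_true, sigma, trials = 1000.0, 50.0, 2000
std_1 = statistics.stdev(doppler_estimate(rng, f_true, sigma, 1)
                         for _ in range(trials))
std_100 = statistics.stdev(doppler_estimate(rng, f_true, sigma, 100)
                           for _ in range(trials))
print(f"single speckle: {std_1:.1f} Hz, 100 speckles: {std_100:.1f} Hz")
# the ratio should come out close to sqrt(100) = 10
```

With 100 averaged speckles the spread drops by roughly a factor of ten, matching the "order of magnitude" reduction reported for the camera-based evaluation.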

  1. Visual odometry based on structural matching of local invariant features using stereo camera sensor.

    PubMed

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields.
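The maximum-weighted-clique formulation can be illustrated with a toy brute-force solver (the paper's fast matching algorithm is not reproduced here): vertices are candidate matches weighted by descriptor similarity, and edges encode pairwise geometric consistency. All weights and the consistency pattern below are invented:

```python
from itertools import combinations

# Brute-force maximum-weighted clique: fine for a handful of candidate
# matches, exponential in general (the paper uses a fast algorithm instead).
def max_weight_clique(weights, consistent):
    n = len(weights)
    best, best_w = (), 0.0
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            # a clique: every pair in the subset must be consistent
            if all(consistent[i][j] for i, j in combinations(subset, 2)):
                w = sum(weights[i] for i in subset)
                if w > best_w:
                    best, best_w = subset, w
    return best, best_w

weights = [0.9, 0.8, 0.7, 0.95]
consistent = [[True] * 4 for _ in range(4)]
for i in range(3):  # match 3 is geometrically inconsistent with the rest
    consistent[i][3] = consistent[3][i] = False
print(max_weight_clique(weights, consistent))  # accepts matches (0, 1, 2)
```

Match 3 has the highest individual weight but is rejected because it cannot join the mutually consistent set, which is exactly the outlier-rejection behavior the clique formulation provides.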

  2. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  3. Visual odometry based on structural matching of local invariant features using stereo camera sensor.

    PubMed

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  4. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality controls of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on camera company and program version. The aim of this work was to develop a free, open-source program (written in the JAVA language) to analyze quality-control data for gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS), and thus on any type of computer in every nuclear medicine department. Based on standard quality-control parameters, this program includes (1) for gamma cameras, a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); (2) for PET systems, three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice, and scatter fraction. The determination of spatial resolution (via a Point Spread Function, PSF, acquisition) makes it possible to compute the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and could soon be downloaded from the Internet. Besides, this program offers the possibility to save the uniformity quality-control results in HTML format, and a warning can be set to automatically inform users of abnormal results. The architecture of the program allows users to easily add any other specific quality-control program. 
Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
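The PSF-to-MTF step mentioned above is standard: the MTF is the normalized modulus of the Fourier transform of the PSF. A one-dimensional sketch using a synthetic Gaussian PSF rather than real camera data (the toolkit itself is an ImageJ/Java plugin; Python is used here only for illustration):

```python
import math

# MTF as the normalized DFT modulus of a 1-D PSF (non-negative frequencies).
def mtf_from_psf(psf):
    n = len(psf)
    mtf = []
    for k in range(n // 2 + 1):
        re = sum(p * math.cos(-2 * math.pi * k * i / n)
                 for i, p in enumerate(psf))
        im = sum(p * math.sin(-2 * math.pi * k * i / n)
                 for i, p in enumerate(psf))
        mtf.append(math.hypot(re, im))
    dc = mtf[0]
    return [m / dc for m in mtf]      # normalized so that MTF(0) = 1

# Synthetic Gaussian PSF sampled on 64 pixels (sigma in pixels)
sigma, n = 2.0, 64
psf = [math.exp(-((i - n / 2) ** 2) / (2 * sigma ** 2)) for i in range(n)]
mtf = mtf_from_psf(psf)
print(mtf[0], mtf[1])   # 1.0 at DC, then falling off toward Nyquist
```

For a Gaussian PSF the MTF is itself Gaussian in frequency, so the curve decays monotonically; a narrower measured PSF yields a flatter MTF, which is the resolution figure the toolkit reports.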

  5. Green up onset in the northern high latitude as observed from satellite data and the ground-based camera networks

    NASA Astrophysics Data System (ADS)

    Kobayashi, H.; Nagai, S.; Aoyama, K.; Kumar, A.; Ichii, K.

    2013-12-01

    The climate of the northern high latitudes has warmed in the last three decades [Serreze et al., 2000]. However, it is not well understood how the terrestrial ecosystems respond to such abrupt climate changes, and whether they cause an increase or decrease in ecosystem productivity. A prolonged summer drought can decrease photosynthetic activity [Goetz et al., 2005], while an extended growing season can increase plant productivity. Accurate monitoring of the green-up onset is thus important for understanding the current and future terrestrial ecosystem response to the warming climate. Decadal changes of the green-up onset have been estimated from the NOAA/AVHRR normalized difference vegetation index (NDVI) data sets by applying a universal threshold, and several improved green-up onset estimation approaches have been proposed using the Terra-MODIS and SPOT-VEGETATION data sets. The digital camera networks deployed across many different ecosystems and regions have made it possible to evaluate the satellite green-up onset data sets. In this study, we estimated the green-up onset in the northern high latitudes using SPOT/VEGETATION data from 1998 to 2011. The MODIS phenology products (MCD12Q2) were also obtained. We compared the estimated green-up onset results with the ground-based camera data sets. The comparisons show that the green-up onset from the MODIS phenology product was earlier than that from the ground-based digital camera data sets. References: Goetz, S. J., Bunn, A. G., Fiske, G. J., and Houghton, R. A. (2005), Satellite-observed photosynthetic trends across boreal North America associated with climate and fire disturbance, Proc. Natl. Acad. Sci., 102, 13521-13525. Serreze, M. C., Walsh, J. E., Chapin, F. S., III, Osterkamp, T., Dyurgerov, M., Romanovsky, V., Oechel, W. C., Morison, J., Zhang, T., and Barry, R. G. (2000), Observational evidence of recent change in the northern high-latitude environment, Clim. Change, 46, 159-207.

  6. The new SCOS-based EGSE of the EPIC flight-spare on-ground cameras

    NASA Astrophysics Data System (ADS)

    La Palombara, Nicola; Abbey, Anthony; Insinga, Fernando; Calderon-Riano, Pedro; Casale, Mauro; Kirsch, Marcus; Martin, James; Munoz, Ramon; Palazzo, Maddalena; Poletti, Mauro; Sembay, Steve; Vallejo, Juan C.; Villa, Gabriele

    2014-07-01

    The XMM-Newton observatory, launched by the European Space Agency in 1999, is still one of the scientific community's most important high-energy astrophysics missions. After almost 15 years in orbit its instruments continue to operate smoothly with a performance close to the immediate post-launch status. The competition for the observing time remains very high with ESA reporting a very healthy over-subscription factor. Due to the efficient use of spacecraft consumables XMM-Newton could potentially be operated into the next decade. However, since the mission was originally planned for 10 years, progressive ageing and/or failures of the on-board instrumentation can be expected. Dealing with them could require substantial changes of the on-board operating software, and of the command and telemetry database, which could potentially have unforeseen consequences for the on-board equipment. In order to avoid this risk, it is essential to test these changes on ground, before their upload. To this aim, two flight-spare cameras of the EPIC experiment (one MOS and one PN) are available on-ground. Originally they were operated through an Electrical Ground Support Equipment (EGSE) system which was developed over 15 years ago to support the test campaigns up to the launch. The EGSE used a specialized command language running on now obsolete workstations. ESA and the EPIC Consortium, therefore, decided to replace it with new equipment in order to fully reproduce on-ground the on-board configuration and to operate the cameras with SCOS2000, the same Mission Control System used by ESA to control the spacecraft. This was a demanding task, since it required both the recovery of the detailed knowledge of the original EGSE and the adjustment of SCOS for this special use. Recently this work has been completed by replacing the EGSE of one of the two cameras, which is now ready to be used by ESA. Here we describe the scope and purpose of this activity, the problems faced during its

  7. Assimilation of PFISR Data Using Support Vector Regression and Ground Based Camera Constraints

    NASA Astrophysics Data System (ADS)

    Clayton, R.; Lynch, K. A.; Nicolls, M. J.; Hampton, D. L.; Michell, R.; Samara, M.; Guinther, J.

    2013-12-01

    In order to best interpret the information gained from multipoint in situ measurements, a Support Vector Regression (SVR) algorithm is being developed to interpret the data collected from the instruments in the context of ground observations (such as those from a camera or radar array). The idea behind SVR is to construct the simplest function that models the data with the least squared error, subject to constraints given by the user. Constraints can be brought into the algorithm from other data sources or from models. As is often the case with data, a perfect solution to such a problem may be impossible, so 'slack' may be introduced to control how closely the model adheres to the data. The algorithm employs kernels, with radial basis functions chosen as an appropriate kernel. The current SVR code can take input data as one- to three-dimensional scalars or vectors, and may also include time. External data can be incorporated and assimilated into a model of the environment. Regions of minimal and maximal values are allowed to relax to the sample average (or a user-supplied model) on size and time scales determined by user input, known as feature sizes. These feature sizes can vary for each degree of freedom if the user desires. The user may also select weights for each data point if it is desirable to weight parts of the data differently. In order to test the algorithm, Poker Flat Incoherent Scatter Radar (PFISR) and MICA sounding rocket data are being used as sample data. The PFISR data consist of many beams, each with multiple ranges. In addition to analyzing the radar data as it stands, the algorithm is being used to simulate data from a localized ionospheric swarm of CubeSats using existing PFISR data. The sample points of the radar at one altitude slice can serve as surrogates for satellites in a CubeSat swarm. The number of beams of the PFISR radar can then be used to see what the algorithm would output for a swarm of similar size. 
By using PFISR data in the 15-beam to
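The following is not the authors' SVR code, but a small radial-basis-function smoother illustrating the ideas from the abstract: a kernel width acting as the "feature size", per-point weights, and relaxation toward the sample average away from the data. Function name and all parameter values are invented:

```python
import math

# RBF smoother sketch: a Gaussian kernel of width `feature_size` blends
# nearby (optionally weighted) samples; a weak prior pulls the estimate
# back to the sample average where no data are nearby.
def rbf_smooth(x, pts, vals, weights=None, feature_size=1.0, prior_weight=0.1):
    if weights is None:
        weights = [1.0] * len(pts)
    avg = sum(vals) / len(vals)          # fallback far from the data
    num = prior_weight * avg
    den = prior_weight
    for p, v, w in zip(pts, vals, weights):
        k = w * math.exp(-((x - p) / feature_size) ** 2)
        num += k * v
        den += k
    return num / den

pts = [0.0, 1.0, 2.0, 3.0]
vals = [10.0, 12.0, 11.0, 9.0]
print(rbf_smooth(1.0, pts, vals, feature_size=0.5))   # close to the local value 12
print(rbf_smooth(50.0, pts, vals, feature_size=0.5))  # relaxes to the average 10.5
```

A true SVR additionally solves a constrained optimization with an epsilon-insensitive loss; this sketch only reproduces the kernel, weighting and relaxation behavior described in the abstract.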

  8. Camera-based measurement for transverse vibrations of moving catenaries in mine hoists using digital image processing techniques

    NASA Astrophysics Data System (ADS)

    Yao, Jiannan; Xiao, Xingming; Liu, Yao

    2016-03-01

    This paper proposes a novel, non-contact sensing method to measure the transverse vibrations of hoisting catenaries in mine hoists. Hoisting catenaries are moving cables, and it is not feasible to measure their transverse vibrations with traditional contact methods. To obtain the transverse displacement of an arbitrary point in a moving catenary, a mask image containing a predefined reference line perpendicular to the catenary is superposed on each frame of the processed image sequence, so that the dynamic intersection points with a grey value of 0 can be identified throughout the sequence. By traversing the coordinates of the pixels with a grey value of 0 and calculating the distance of the identified dynamic points from the reference, the transverse displacement of the selected point in the hoisting catenary can be obtained. Furthermore, based on a theoretical model, the reasonableness and applicability of the proposed camera-based method were confirmed. A laboratory experiment was also carried out, which validated the accuracy of the proposed method. The results indicate that the proposed camera-based method is suitable for measuring the transverse vibrations of moving cables.
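The displacement extraction can be sketched on a toy grey-level frame: find the pixels of grey value 0 in a fixed reference column and measure the offset of their centroid from a reference row. The geometry and pixel values below are invented for illustration:

```python
# Toy version of the per-frame measurement: the cable appears as grey
# value 0 where it crosses the reference column; its transverse
# displacement is the centroid offset from the reference row, in pixels.
def transverse_displacement(frame, ref_col, ref_row):
    hits = [r for r, row in enumerate(frame) if row[ref_col] == 0]
    if not hits:
        return None                     # cable not found in this frame
    centre = sum(hits) / len(hits)      # centroid if the cable spans pixels
    return centre - ref_row             # signed displacement in pixels

# 5x5 frame, all bright except the cable crossing the reference column
frame = [[255] * 5 for _ in range(5)]
frame[3][2] = 0                         # cable at row 3, reference column 2
print(transverse_displacement(frame, ref_col=2, ref_row=1))  # -> 2.0
```

Repeating this over the image sequence yields the displacement-versus-time signal whose spectrum gives the transverse vibration; a camera calibration would convert pixels to millimetres.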

  9. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    NASA Astrophysics Data System (ADS)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties are elusive. We measured key leaf properties including chlorophyll concentration and leaf reflectance on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found that there was a mismatch between the camera-based phenological metric for canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoid concentrations and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as ground validation for satellites. Using the high-temporal-resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths of, and potential uncertainties in, using canopy color as a proxy for ecosystem functioning.
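The green chromatic coordinate used as the camera greenness metric is computed as gcc = G / (R + G + B) from the mean digital numbers of the region of interest in each image; the RGB values below are illustrative, not the study's data:

```python
# Green chromatic coordinate: the green fraction of total RGB brightness,
# which largely cancels scene-illumination changes between daily images.
def gcc(r, g, b):
    return g / (r + g + b)

# Illustrative mean ROI values (R, G, B) at two points in the season:
for label, rgb in [("winter", (80, 78, 70)), ("peak green-up", (60, 110, 50))]:
    print(label, round(gcc(*rgb), 3))
```

Tracking this single number per day produces the seasonal greenness curve whose rise date defines the camera-based green-up onset.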

  10. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating breakdown in tissue structure brought by precancerous or cancerous conditions. With this data the patient can seek further testing and diagnosis as needed. Proliferation of this device will allow communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  11. Performance Evaluation of a Microchannel Plate based X-ray Camera with a Reflecting Grid

    NASA Astrophysics Data System (ADS)

    Visco, A.; Drake, R. P.; Harding, E. C.; Rathore, G. K.

    2006-10-01

    Microchannel plates (MCPs) are used in a variety of imaging systems as a means of amplifying the incident radiation. Using a microchannel-plate mount recently developed at the University of Michigan, the effects of a metal reflecting grid are explored. The reflecting grid creates a potential difference above the MCP input surface that forces ejected electrons back into the pores, which may increase the quantum efficiency of the camera. We investigate the changes in the pulse-height distribution, modulation transfer function, and quantum efficiency of MCPs caused by the introduction of the reflecting grid. Work supported by the Naval Research Laboratory, the National Nuclear Security Administration under the Stewardship Science Academic Alliances program through DOE Research Grants DE-FG52-03NA00064 and DE-FG53-2005-NA26014, and Livermore National Laboratory.

  12. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract the vibrations that occur when humans use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm3 and is strong enough to suspend an image sensor. The processes utilized to fabricate the IS include inductively coupled plasma (ICP) processes, reactive ion etching (RIE) processes and the flip-chip bonding method. The IS is designed to enable the electrical signals from the suspended image sensor to be successfully routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm at a driving current of 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can reduce hand tremor by 72.5%.

  13. Fall detection based on body part tracking using a depth camera.

    PubMed

    Bian, Zhen-Peng; Hou, Junhui; Chau, Lap-Pui; Magnenat-Thalmann, Nadia

    2015-03-01

    The elderly population is increasing rapidly all over the world, and one major risk for elderly people, especially those living alone, is fall accidents. In this paper, we propose a robust fall detection approach that analyzes the tracked key joints of the human body using a single depth camera. Compared to rival methods that rely on RGB inputs, the proposed scheme is independent of lighting conditions and can work even in a dark room. In our scheme, a pose-invariant randomized decision tree algorithm is proposed for key joint extraction, which requires low computational cost during both training and testing. A support vector machine classifier, whose input is the 3-D trajectory of the head joint, is then employed to determine whether a fall motion has occurred. The experimental results demonstrate that the proposed fall detection method is more accurate and robust than the state-of-the-art methods.
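The paper's decision stage is an SVM on the head-joint trajectory; as a hedged stand-in, the same idea can be caricatured with a simple threshold rule on the head's drop speed and final height. The thresholds, frame rate and trajectories below are invented, not from the paper:

```python
# Simplified stand-in for the SVM stage: flag a fall when the head joint
# drops quickly AND ends near floor level. All thresholds are illustrative.
def is_fall(traj, dt=1 / 30, drop_speed=1.0, floor_height=0.4):
    """traj: list of (x, y, z) head positions in metres, one per frame."""
    heights = [z for (_, _, z) in traj]
    max_drop_speed = max((heights[i] - heights[i + 1]) / dt
                         for i in range(len(heights) - 1))
    return max_drop_speed > drop_speed and heights[-1] < floor_height

fall = [(0, 0, 1.6 - 0.1 * i) for i in range(15)]   # rapid drop to ~0.2 m
sit = [(0, 0, 1.6 - 0.02 * i) for i in range(15)]   # slow sit-down
print(is_fall(fall), is_fall(sit))  # -> True False
```

An SVM learns this boundary from labeled trajectories instead of hand-set thresholds, which is what makes the published method robust across subjects and fall types.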

  14. CHARACTERIZATION OF PLASTICALLY-INDUCED STRUCTURAL CHANGES IN A Zr-BASED BULK METALLIC GLASS USING POSITRON ANNIHILATION SPECTROSCOPY

    SciTech Connect

    Flores, K M; Kanungo, B P; Glade, S C; Asoka-Kumar, P

    2005-09-16

    Flow in metallic glasses is associated with stress-induced cooperative rearrangements of small groups of atoms involving the surrounding free volume. Understanding the details of these rearrangements therefore requires knowledge of the amount and distribution of the free volume and how that distribution evolves with deformation. The present study employs positron annihilation spectroscopy to investigate the free volume change in Zr{sub 58.5}Cu{sub 15.6}Ni{sub 12.8}Al{sub 10.3}Nb{sub 2.8} bulk metallic glass after inhomogeneous plastic deformation by cold rolling and structural relaxation by annealing. Results indicate that the size distribution of open volume sites is at least bimodal. The size and concentration of the larger group, identified as flow defects, changes with processing. Following initial plastic deformation the size of the flow defects increases, consistent with the free volume theory for flow. Following more extensive deformation, however, the size distribution of the positron traps shifts, with much larger open volume sites forming at the expense of the flow defects. This suggests that a critical strain is required for flow defects to coalesce and form more stable nanovoids, which have been observed elsewhere by high resolution TEM. Although these results suggest the presence of three distinct open volume size groups, further analysis indicates that all groups have the same line shape parameter. This is in contrast to the distinctly different interactions observed in crystalline materials with multiple defect types. This similarity may be due to the disordered structure of the glass and positron affinity to particular atoms surrounding open-volume regions.

  15. Quantitative Fluorescence Assays Using a Self-Powered Paper-Based Microfluidic Device and a Camera-Equipped Cellular Phone

    PubMed Central

    Thom, Nicole K.; Lewis, Gregory G.; Yeung, Kimy

    2014-01-01

    Fluorescence assays often require specialized equipment and, therefore, are not easily implemented in resource-limited environments. Herein we describe a point-of-care assay strategy in which fluorescence in the visible region is used as a readout, while a camera-equipped cellular phone is used to capture the fluorescent response and quantify the assay. The fluorescence assay is made possible using a paper-based microfluidic device that contains an internal fluidic battery, a surface-mount LED, a 2-mm section of a clear straw as a cuvette, and an appropriately-designed small molecule reagent that transforms from weakly fluorescent to highly fluorescent when exposed to a specific enzyme biomarker. The resulting visible fluorescence is digitized by photographing the assay region using a camera-equipped cellular phone. The digital images are then quantified using image processing software to provide sensitive as well as quantitative results. In a model 30 min assay, the enzyme β-D-galactosidase was measured quantitatively down to 700 pM levels. This Communication describes the design of these types of assays in paper-based microfluidic devices and characterizes the key parameters that affect the sensitivity and reproducibility of the technique. PMID:24490035
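
    The quantification step described above (photograph the assay region, then reduce it to a number with image processing) can be sketched in a few lines. This is a minimal illustrative version operating on an RGB image represented as nested lists of pixel tuples; the function names and ROI layout are assumptions, not the paper's software.

```python
def roi_mean(img, x0, y0, w, h, channel=1):
    """Mean of one color channel (1 = green) over a w×h region of interest.
    img is a list of rows, each row a list of (r, g, b) tuples."""
    vals = [img[y][x][channel] for y in range(y0, y0 + h) for x in range(x0, x0 + w)]
    return sum(vals) / len(vals)

def assay_signal(img, assay_roi, blank_roi):
    """Background-subtracted fluorescence readout from two image regions,
    each given as an (x0, y0, w, h) tuple."""
    return roi_mean(img, *assay_roi) - roi_mean(img, *blank_roi)
```

    Calibrating this background-subtracted intensity against known enzyme concentrations is what turns the phone photo into a quantitative assay.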

  16. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    NASA Astrophysics Data System (ADS)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. Total doses of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue, mapped onto the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions, but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient, however, a 2 ± 0.5 cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and the treatment plan, indicating a beam patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a hurried patient setup in the PET/CT scanner, a slight mis-positioning of the patient was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  17. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    SciTech Connect

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-11-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
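
    The two amplitude-based gating schemes compared above, A1 (equal amplitude bins) and A2 (equal counts per bin), can be sketched on a breathing-amplitude trace. This is an illustrative binning sketch, not the clinical list-mode software; function names are assumptions.

```python
from collections import Counter

def equal_amplitude_bins(amps, n=8):
    """A1: split the amplitude range into n equal-width bins; counts per bin
    then depend on where the patient actually spends time."""
    lo, hi = min(amps), max(amps)
    width = (hi - lo) / n
    return [min(int((a - lo) / width), n - 1) for a in amps]

def equal_counts_bins(amps, n=8):
    """A2: rank the samples by amplitude so every bin receives the same
    number of counts (equal statistics per gated image)."""
    order = sorted(range(len(amps)), key=lambda i: amps[i])
    bins = [0] * len(amps)
    for rank, i in enumerate(order):
        bins[i] = min(rank * n // len(amps), n - 1)
    return bins
```

    Under baseline drift, A1 concentrates counts in a few bins while A2 keeps statistics uniform, which is one reason the two schemes recover motion differently.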

  18. Contrast-enhanced [18F] fluorodeoxyglucose-positron emission tomography/computed tomography in clinical oncology: tumor-, site-, and question-based comparison with standard positron emission tomography/computed tomography

    PubMed Central

    2014-01-01

    Background The present study aimed to evaluate the added value of contrast-enhanced computed tomography (ceCT) in comparison to standard, non-enhanced CT in the context of a combined positron emission tomography (PET)/CT examination by means of a tumor-, site-, and clinical question-based approach. Methods Analysis was performed in 202 patients undergoing PET/CT consisting of a multiphase CT protocol followed by a whole-body PET. The Cochran Q test was performed, followed by a multiple comparisons correction (McNemar test and Bonferroni adjustment), to compare standard and contrast-enhanced PET (cePET/CT). Histopathology or clinical-radiologic follow-up greater than 1 year was used as a reference. Results cePET/CT showed significantly different results with respect to standard PET/CT in head and neck and gastrointestinal cancer (P = 0.02 and 0.0002, respectively), in the evaluation of lesions located in the abdomen (P = 0.009), and in the context of disease restaging (P = 0.003). In all these clinical scenarios, adding ceCT resulted in a distinct benefit, by yielding a higher percentage of change in patient management. Conclusion These data strongly underline the importance of strictly selecting patients for the combined exam. In particular, patient selection should not be driven solely by mere tumor classification, but should also account for the clinical question and the anatomical location of the neoplastic disease, which can significantly impact patient management. PMID:25609564
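
    The McNemar test used above for the paired comparison of standard and contrast-enhanced readings follows a standard closed form. A minimal sketch, assuming the usual continuity-corrected statistic on the discordant cell counts:

```python
def mcnemar_chi2(b, c):
    """McNemar statistic with continuity correction for paired yes/no readings.
    b, c are the discordant counts (correct on one modality but not the other);
    compare against the chi-square critical value 3.84 for P < .05 with 1 df."""
    return (abs(b - c) - 1) ** 2 / (b + c)
```

    For example, if ceCT changed the reading in 15 cases one way and 5 the other, the statistic is (|15−5|−1)²/20 = 4.05, just past the 0.05 threshold (before any Bonferroni adjustment, which the study also applies).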

  19. Individualized Positron Emission Tomography–Based Isotoxic Accelerated Radiation Therapy Is Cost-Effective Compared With Conventional Radiation Therapy: A Model-Based Evaluation

    SciTech Connect

    Bongers, Mathilda L.; Coupé, Veerle M.H.; De Ruysscher, Dirk; Oberije, Cary; Lambin, Philippe; Uyl-de Groot, Cornelia A.

    2015-03-15

    Purpose: To evaluate long-term health effects, costs, and cost-effectiveness of positron emission tomography (PET)-based isotoxic accelerated radiation therapy treatment (PET-ART) compared with conventional fixed-dose CT-based radiation therapy treatment (CRT) in non-small cell lung cancer (NSCLC). Methods and Materials: Our analysis uses a validated decision model, based on data of 200 NSCLC patients with inoperable stage I-IIIB. Clinical outcomes, resource use, costs, and utilities were obtained from the Maastro Clinic and the literature. Primary model outcomes were the difference in life-years (LYs), quality-adjusted life-years (QALYs), costs, and the incremental cost-effectiveness and cost/utility ratio (ICER and ICUR) of PET-ART versus CRT. Model outcomes were obtained from averaging the predictions for 50,000 simulated patients. A probabilistic sensitivity analysis and scenario analyses were carried out. Results: The average incremental costs per patient of PET-ART were €569 (95% confidence interval [CI] €−5327-€6936) for 0.42 incremental LYs (95% CI 0.19-0.61) and 0.33 QALYs gained (95% CI 0.13-0.49). The base-case scenario resulted in an ICER of €1360 per LY gained and an ICUR of €1744 per QALY gained. The probabilistic analysis gave a 36% probability that PET-ART improves health outcomes at reduced costs and a 64% probability that PET-ART is more effective at slightly higher costs. Conclusion: On the basis of the available data, individualized PET-ART for NSCLC seems to be cost-effective compared with CRT.
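
    The headline ratios above are simple incremental quotients, which makes them easy to sanity-check. A back-of-envelope sketch (the published figures come from the full decision model, so they differ slightly from the raw division of the reported averages):

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per unit of health gained."""
    return delta_cost / delta_effect

# Averages reported in the abstract: Δcost = €569, ΔLY = 0.42, ΔQALY = 0.33
cost_per_ly = icer(569, 0.42)    # ≈ €1355 per life-year (model reports €1360)
cost_per_qaly = icer(569, 0.33)  # ≈ €1724 per QALY (model reports €1744)
```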

  20. Imaging performance comparison between a LaBr3:Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera

    SciTech Connect

    Russo, P.; Mettivier, G.; Pani, R.; Pellegrini, R.; Cinti, M. N.; Bennati, P.

    2009-04-15

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr3:Ce continuous scintillator crystal (49 × 49 × 5 mm³) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14 × 14 × 1 mm³) with 256 × 256 square pixels and a pitch of 55 μm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 μm, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.

  1. A method for automatic 3D reconstruction based on multiple views from a free-mobile camera

    NASA Astrophysics Data System (ADS)

    Yu, Qingbing; Zhang, Zhijiang

    2004-09-01

    Automatic 3D-reconstruction from an image sequence of an object is described. The construction is based on multiple views from a free-mobile camera, and the object is placed on a novel calibration pattern consisting of two concentric circles connected by radial line segments. Compared to other methods of 3D-reconstruction, the approach reduces the restrictions on the measurement environment and increases the flexibility of the user. In the first step, the images of each view are calibrated individually to obtain camera information. The calibration pattern is separated from the input image with the erosion-dilation algorithm, and the calibration points can be extracted from the pattern image accurately after estimation of two ellipses and lines. Tsai's two-stage technique is used in the calibration process. In the second step, the 3D reconstruction of the real object can be subdivided into two parts: shape reconstruction and texture mapping. Following the principle of "shape from silhouettes (SFS)", a bounding cone is constructed from one image using the calibration information and silhouette. The intersection of all bounding cones defines an approximate geometric representation. Experiments with a real object were performed, with a reconstruction error of <1%, which validates the method's efficiency and feasibility.
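
    The shape-from-silhouettes idea above (keep only the volume whose projection falls inside every view's silhouette) can be illustrated with a toy voxel-carving sketch. This assumes simple orthographic projections and set-based silhouette masks, which is a deliberate simplification of the bounding-cone intersection in the paper.

```python
def carve(grid_size, silhouettes):
    """Keep voxels whose projection lies inside every silhouette.
    silhouettes: list of (project, mask) pairs, where project maps a voxel
    (x, y, z) to a 2-D cell and mask is the set of cells inside the silhouette."""
    kept = set()
    for x in range(grid_size):
        for y in range(grid_size):
            for z in range(grid_size):
                if all(project(x, y, z) in mask for project, mask in silhouettes):
                    kept.add((x, y, z))
    return kept

# Two illustrative orthographic views of a small 3×3×3 grid:
top = (lambda x, y, z: (x, y), {(1, 1), (1, 2)})    # view along z
front = (lambda x, y, z: (x, z), {(1, 0), (1, 1)})  # view along y
```

    With calibrated perspective cameras, `project` would apply each view's projection matrix instead of dropping a coordinate, and the kept set approximates the visual hull.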

  2. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
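
    The "user-defined linear sequential-loop" filter processing described above is essentially a composable pipeline. A minimal sketch, with two hypothetical filters standing in for μAVS2's filter library:

```python
def make_pipeline(filters):
    """Chain image filters in the user-chosen order; each filter's output
    feeds the next, mirroring the sequential-loop design."""
    def run(frame):
        for f in filters:
            frame = f(frame)
        return frame
    return run

# Hypothetical filters on a flat list of 8-bit pixel values:
invert = lambda frame: [255 - v for v in frame]
threshold = lambda frame: [255 if v > 127 else 0 for v in frame]
```

    Because each filter only sees one frame at a time and no intermediate frames are buffered, memory and CPU requirements stay low, as the abstract notes.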

  3. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    NASA Astrophysics Data System (ADS)

    Robert, Charlotte; Montémont, Guillaume; Rebuffel, Véronique; Buvat, Irène; Guérin, Lucie; Verger, Loïck

    2010-05-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain in efficiency, a high aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on a fine sampling of the CZT detector and on the depth of interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera without loss in spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  4. High-speed real-time 3-D coordinates measurement based on fringe projection profilometry considering camera lens distortion

    NASA Astrophysics Data System (ADS)

    Feng, Shijie; Chen, Qian; Zuo, Chao; Sun, Jiasong; Yu, Shi Ling

    2014-10-01

    Optical three-dimensional (3-D) profilometry is gaining increasing attention for its simplicity, flexibility, high accuracy, and non-contact nature. Recent advances in imaging sensors and digital projection technology further its progress in high-speed, real-time applications, enabling 3-D shape reconstruction of moving objects and dynamic scenes. However, the camera lens is never perfect, and lens distortion does influence the accuracy of the measurement result, which is often overlooked in existing real-time 3-D shape measurement systems. To this end, here we present a novel high-speed real-time 3-D coordinate measuring technique based on fringe projection that takes camera lens distortion into account. A pixel mapping relation between a distorted image and a corrected one is pre-determined and stored in computer memory for real-time fringe correction. The out-of-plane height is obtained first, and the two corresponding in-plane coordinates are then acquired on the basis of the solved height. In addition, a lookup table (LUT) method is introduced for fast data processing. Our experimental results reveal that the measurement error of the in-plane coordinates was reduced by one order of magnitude and the accuracy of the out-of-plane coordinate tripled after the distortions were eliminated. Moreover, owing to the generated LUTs, a 3-D reconstruction speed of 92.34 frames per second can be achieved.
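
    The precomputed pixel-mapping idea above (build the distorted-to-corrected correspondence once, then do pure lookups per frame) can be sketched as follows. The mapping function here is a placeholder assumption; a real system would derive it from the calibrated lens distortion model.

```python
def build_lut(w, h, src_of):
    """Precompute, for every corrected pixel (x, y), its source pixel in the
    raw (distorted) image. src_of encodes the inverse distortion mapping."""
    return [[src_of(x, y) for x in range(w)] for y in range(h)]

def correct(img, lut):
    """Apply the stored mapping per frame: pure lookups, no per-frame
    distortion arithmetic, which is what makes real-time rates feasible."""
    return [[img[sy][sx] for (sx, sy) in row] for row in lut]
```

    As a toy stand-in for a distortion model, a horizontal flip mapping `lambda x, y: (w - 1 - x, y)` shows the mechanics: the LUT is built once, then every frame is corrected by indexing alone.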

  5. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future. PMID:20210459

  6. A Trajectory and Orientation Reconstruction Method for Moving Objects Based on a Moving Monocular Camera

    PubMed Central

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-01-01

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition of when this method has the unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of the object with poor conditions can also be calculated. The required condition for the existence of definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, or a conic and another simple trajectory, but also provides good result for more complicated trajectories, making it widely applicable. PMID:25760053

  7. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed by two stages: development of visual information delivery assistant (VIDA) with a stereo camera and adding a tactile feedback interface with dual actuators for guidance and distance feedbacks. In the first stage, user's pointing finger is automatically detected using color and disparity data from stereo images and then a 3D pointing direction of the finger is estimated with its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and the distance is then estimated in real time. For the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and tactile distance feedbacks are perfectly identifiable to the blind. PMID:24932864
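
    The distance estimation underlying the VIDA stage above relies on stereo disparity. For a calibrated pinhole stereo pair, range follows the standard relation Z = f·B/d; the sketch below illustrates that relation (the function name and numbers are illustrative, not the paper's calibration).

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo range: Z = f * B / d, with focal length and disparity
    in the same pixel units and the baseline in meters."""
    return focal_px * baseline_m / disparity_px
```

    For instance, with a 700 px focal length and a 6 cm baseline, a 14 px disparity on the pointed-at object corresponds to a range of about 3 m, which is the kind of distance the tactile interface then encodes.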

  8. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    PubMed

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition of when this method has the unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of the object with poor conditions can also be calculated. The required condition for the existence of definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, or a conic and another simple trajectory, but also provides good result for more complicated trajectories, making it widely applicable.

  9. A smart camera based traffic enforcement system: experiences from the field

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2013-03-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Embedded vision systems can count vehicles and estimate the state of traffic along the road; they can supplement or replace loop sensors, with their limited local scope, and radar, which measures the speed, presence and number of vehicles. This work presents a vision system which has been built to detect and report traffic rule violations at unsecured railway crossings, which pose a threat to drivers day and night. Our system is designed to detect and record vehicles passing over the railway crossing after the red light has been activated. Sparse optical flow in conjunction with motion clustering is used for real-time motion detection in order to capture these safety critical events. The cameras are activated by an electrical signal from the railway when the red light turns on. If they detect a vehicle moving over the stopping line, and it is well over this limit, an image sequence will be recorded and stored onboard for later evaluation. The system has been designed to be operational in all weather conditions, delivering human-readable license plate images even under the worst illumination conditions, such as direct incident sunlight or a direct view into vehicle headlights. After several months of operation in the field, we can report on the performance of the system, its hardware implementation, as well as the implementation of algorithms which have proven to be usable in this real-world application.
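
    The trigger logic above (detect motion, check whether it crosses the stopping line, start recording) can be illustrated with a much-simplified stand-in. The deployed system uses sparse optical flow with motion clustering; this sketch substitutes plain frame differencing on grayscale frames, and all names and thresholds are assumptions.

```python
def moving_pixels(prev, curr, thresh=30):
    """Crude motion cue: pixels whose intensity changed sharply between
    consecutive frames (a stand-in for sparse optical flow features)."""
    return [(x, y) for y, row in enumerate(curr)
            for x, v in enumerate(row) if abs(v - prev[y][x]) > thresh]

def crossed_stop_line(prev, curr, line_x):
    """Trigger recording when detected motion reaches past the stopping
    line, assumed here to be the vertical image column line_x."""
    return any(x > line_x for x, _ in moving_pixels(prev, curr))
```

    A production system would additionally cluster the motion vectors into vehicle hypotheses and track them over time before committing an event to storage.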

  10. A global station coordinate solution based upon camera and laser data - GSFC 1973

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Klosko, S. M.

    1973-01-01

    Results for the geocentric coordinates of 72 globally distributed satellite tracking stations consisting of 58 cameras and 14 lasers are presented. The observational data for this solution consists of over 65,000 optical observations and more than 350 laser passes recorded during the National Geodetic Satellite Program, the 1968 Centre National d'Etudes Spatiales/Smithsonian Astrophysical Observatory (SAO) Program, and International Satellite Geodesy Experiment Program. Dynamic methods were used. The data were analyzed with the GSFC GEM and SAO 1969 Standard Earth Gravity Models. The recent value of GM = 398600.8 km³/s² derived at the Jet Propulsion Laboratory (JPL) gave the best results for this combination laser/optical solution. Solutions are made with the deep space solution of JPL (LS-25 solution) including results obtained at GSFC from Mariner-9 Unified S-Band tracking. Datum transformation parameters relating North America, Europe, South America, and Australia are given, enabling the positions of some 200 other tracking stations to be placed in the geocentric system.

  11. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibre in cotton has a great impact on cotton textile production: it degrades product quality and thereby affects the economic benefits and market competitiveness of the corporation. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, advancing the quality of cotton textiles, and reducing production cost, and the technology has favorable market value and development prospects. Optical detecting systems have found widespread application. In our system, a linear CCD camera scans the running cotton; the video signals are then fed into a computer and processed according to differences in grayscale. If heterogeneous fibre is found in the cotton, the computer sends a command to drive a gas nozzle to eliminate it. In this paper, we adopt a monochrome LED array as the new detecting light source; its lamp flicker, stability of luminous intensity, lumen depreciation, and useful life are all superior to fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, finally adopting a violet LED array as the new detecting light source. The whole hardware structure and software design are introduced in this paper.
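
    The grayscale-difference detection on each CCD scan line can be sketched simply: flag runs of pixels that deviate from the cotton background, each run being a candidate foreign fibre for the ejection nozzle. Names and tolerances below are illustrative assumptions.

```python
def detect_foreign(scanline, bg_mean, tol=25):
    """Return (start, end) index pairs of pixel runs whose grayscale deviates
    from the cotton background by more than tol; each run is a candidate
    heterogeneous fibre to be blown out by the gas nozzle."""
    runs, start = [], None
    for i, v in enumerate(scanline):
        if abs(v - bg_mean) > tol:
            if start is None:
                start = i
        elif start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(scanline)))
    return runs
```

    In practice the background level would be estimated adaptively from recent scan lines rather than fixed, since cotton brightness varies with density and illumination.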

  12. Lightweight camera head for robotic-based binocular stereo vision: an integrated engineering approach

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Parker, Graham A.

    1992-03-01

    This paper presents the design and development of a real-time eye-in-hand stereo-vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom but it relies on the robot end- effector for all remaining movement. This provides the robot with exploratory sensing abilities allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost, low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small size envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation, and component inspection for the manufacturing industry.

  13. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
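
    The least-squares identification of unknown mass properties described above can be illustrated in its simplest scalar form: fit a single unknown parameter (for example, an unknown mass relating commanded acceleration to measured force) from regressor/measurement pairs. This is a closed-form stand-in for the paper's real-time recursive estimator; the function name is an assumption.

```python
def least_squares_scalar(phi, y):
    """Closed-form least-squares estimate of theta in y ≈ theta * phi,
    minimizing the sum of squared residuals over all samples."""
    return sum(p * t for p, t in zip(phi, y)) / sum(p * p for p in phi)
```

    A real-time version would update the estimate recursively as each new gyro/camera sample arrives instead of batching, and would estimate a full inertia parameter vector rather than one scalar.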

  14. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  15. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
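The self-tuning scheme summarized above rests on a least-squares estimator that identifies the unknown mass properties in real time and feeds them to the controller. A minimal sketch of such a recursive least-squares (RLS) estimator on a generic linear-in-parameters model is shown below; the two-parameter setup, regressor, and noise level are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive least-squares (RLS) step: refine the parameter
    estimate theta from regressor phi and measurement y.
    lam is a forgetting factor (1.0 = ordinary RLS)."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + (phi.T @ P @ phi).item())  # gain vector
    err = y - (phi.T @ theta).item()                # prediction error
    theta = theta + (K * err).ravel()
    P = (P - K @ phi.T @ P) / lam
    return theta, P

# Identify two unknown parameters of a linear-in-parameters model
# y = phi . theta_true + noise, mimicking online mass-property estimation.
rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])   # hypothetical "mass properties"
theta = np.zeros(2)
P = 100.0 * np.eye(2)
for _ in range(500):
    phi = rng.normal(size=2)
    y = phi @ theta_true + rng.normal(scale=0.01)
    theta, P = rls_update(theta, P, phi, y)
```

The estimate converges to the true parameters as measurements accumulate, which is what allows the controller gains to be tuned on the fly.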

  16. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in increments from 0 N to 500 N, simulating the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, digital image processing software that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the frontal view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The results of the digital image measurement showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error could reach 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
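The center-of-mass step of the marker-detection pipeline above (a weighted average of pixel coordinates, with gray values as weights) can be sketched as follows; the synthetic Gaussian marker and image size are illustrative assumptions, not the authors' data.

```python
import numpy as np

def marker_centroid(gray):
    """Sub-pixel marker position: centre of mass of the blob, i.e. the
    weighted average of pixel coordinates with gray values as weights."""
    ys, xs = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    total = gray.sum()
    return (xs * gray).sum() / total, (ys * gray).sum() / total

# Synthetic Gaussian marker centred at x = 12.3, y = 7.6 (sub-pixel).
ys, xs = np.mgrid[0:32, 0:32]
img = np.exp(-((xs - 12.3) ** 2 + (ys - 7.6) ** 2) / 8.0)
cx, cy = marker_centroid(img)
```

Because the weights vary smoothly across the blob, the recovered position is accurate to a small fraction of a pixel, which is what makes the sub-pixel displacement precision quoted in the abstract plausible.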

  17. The CAMCAO infrared camera

    NASA Astrophysics Data System (ADS)

    Amorim, Antonio; Melo, Antonio; Alves, Joao; Rebordao, Jose; Pinhao, Jose; Bonfait, Gregoire; Lima, Jorge; Barros, Rui; Fernandes, Rui; Catarino, Isabel; Carvalho, Marta; Marques, Rui; Poncet, Jean-Marc; Duarte Santos, Filipe; Finger, Gert; Hubin, Norbert; Huster, Gotthard; Koch, Franz; Lizon, Jean-Louis; Marchetti, Enrico

    2004-09-01

    The CAMCAO instrument is a high resolution near infrared (NIR) camera conceived to operate together with the new ESO Multi-conjugate Adaptive optics Demonstrator (MAD) with the goal of evaluating the feasibility of Multi-Conjugate Adaptive Optics techniques (MCAO) on the sky. It is a high-resolution wide field of view (FoV) camera that is optimized to use the extended correction of the atmospheric turbulence provided by MCAO. While the first purpose of this camera is sky observation, in the MAD setup, to validate the MCAO technology, in a second phase the CAMCAO camera is planned to attach directly to the VLT for scientific astrophysical studies. The camera is based on the 2k×2k HAWAII2 infrared detector controlled by an ESO external IRACE system and includes standard IR band filters mounted on a positional filter wheel. The CAMCAO design requires that the optical components and the IR detector be kept at low temperatures in order to avoid emitting radiation and to lower detector noise in the analysis region. The cryogenic system includes a LN2 tank and a specially developed pulse tube cryocooler. Field and pupil cold stops are implemented to reduce the infrared background and the stray light. The CAMCAO optics provide diffraction limited performance down to the J band, but the detector sampling fulfills the Nyquist criterion for the K band (2.2 μm).

  18. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66 ) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  19. Positron annihilation studies of organic superconductivity

    SciTech Connect

    Yen, H.L.; Lou, Y.; Ali, E.H.

    1994-09-01

    The positron lifetimes of two organic superconductors, {kappa}-(ET){sub 2}Cu(NCS){sub 2} and {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br, are measured as a function of temperature across {Tc}. A drop of the positron lifetime below {Tc} is observed. Positron-electron momentum densities are measured by using 2D-ACAR to search for the Fermi surface in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br. Positron density distributions and positron-electron overlaps are calculated by using the orthogonalized linear combination of atomic orbitals (OLCAO) method to interpret the temperature dependence in terms of a local charge transfer, which is inferred to be related to the superconducting transition. 2D-ACAR results in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br are compared with theoretical band calculations based on a first-principles local density approximation. The importance of performing accurate band calculations for the interpretation of positron annihilation data is emphasized.

  20. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy-to-calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
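A per-frame body-movement score of the kind Motion Tracker produces can be approximated with simple frame differencing (the fraction of pixels whose gray level changes noticeably between consecutive frames). The threshold and the synthetic frames below are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def motion_estimate(prev, curr, threshold=10):
    """Proportion of pixels whose absolute gray-level change exceeds a
    threshold: a crude motion-silhouette score for one frame pair."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > threshold).mean()

# Two synthetic 8-bit frames: a bright 20x20 square shifts right by 4 px.
f0 = np.zeros((64, 64), dtype=np.uint8)
f1 = np.zeros((64, 64), dtype=np.uint8)
f0[20:40, 20:40] = 200
f1[20:40, 24:44] = 200
score = motion_estimate(f0, f1)
```

Only the non-overlapping strips of the square register as change, so larger or faster movements yield higher scores; accumulating the score over time gives a movement time series that can be correlated against sensors such as pressure pads or accelerometers.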

  1. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  2. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  3. Geologic map of the northern hemisphere of Vesta based on Dawn Framing Camera (FC) images

    NASA Astrophysics Data System (ADS)

    Ruesch, Ottaviano; Hiesinger, Harald; Blewett, David T.; Williams, David A.; Buczkowski, Debra; Scully, Jennifer; Yingst, R. Aileen; Roatsch, Thomas; Preusker, Frank; Jaumann, Ralf; Russell, Christopher T.; Raymond, Carol A.

    2014-12-01

    The Dawn Framing Camera (FC) has imaged the northern hemisphere of the Asteroid (4) Vesta at high spatial resolution and coverage. This study represents the first investigation of the overall geology of the northern hemisphere (22-90°N, quadrangles Av-1, 2, 3, 4 and 5) using these unique Dawn mission observations. We have compiled a morphologic map and performed crater size-frequency distribution (CSFD) measurements to date the geologic units. The hemisphere is characterized by a heavily cratered surface with a few highly subdued basins up to ∼200 km in diameter. The most widespread unit is a plateau (cratered highland unit), similar to, although of lower elevation than the equatorial Vestalia Terra plateau. Large-scale troughs and ridges have regionally affected the surface. Between ∼180°E and ∼270°E, these tectonic features are well developed and related to the south pole Veneneia impact (Saturnalia Fossae trough unit), elsewhere on the hemisphere they are rare and subdued (Saturnalia Fossae cratered unit). In these pre-Rheasilvia units we observed an unexpectedly high frequency of impact craters up to ∼10 km in diameter, whose formation could in part be related to the Rheasilvia basin-forming event. The Rheasilvia impact has potentially affected the northern hemisphere also with S-N small-scale lineations, but without covering it with an ejecta blanket. Post-Rheasilvia impact craters are small (<60 km in diameter) and show a wide range of degradation states due to impact gardening and mass wasting processes. Where fresh, they display an ejecta blanket, bright rays and slope movements on walls. In places, crater rims have dark material ejecta and some crater floors are covered by ponded material interpreted as impact melt.

  4. Are we ready for positron emission tomography/computed tomography-based target volume definition in lymphoma radiation therapy?

    PubMed

    Yeoh, Kheng-Wei; Mikhaeel, N George

    2013-01-01

    Fluorine-18 fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) has become indispensable for the clinical management of lymphomas. With consistent evidence that it is more accurate than anatomic imaging in the staging and response assessment of many lymphoma subtypes, its utility continues to increase. There have therefore been efforts to incorporate PET/CT data into radiation therapy decision making and in the planning process. Further, there have also been studies investigating target volume definition for radiation therapy using PET/CT data. This article will critically review the literature and ongoing studies on the above topics, examining the value and methods of adding PET/CT data to the radiation therapy treatment algorithm. We will also discuss the various challenges and the areas where more evidence is required.

  5. Are We Ready for Positron Emission Tomography/Computed Tomography-based Target Volume Definition in Lymphoma Radiation Therapy?

    SciTech Connect

    Yeoh, Kheng-Wei; Mikhaeel, N. George

    2013-01-01

    Fluorine-18 fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) has become indispensable for the clinical management of lymphomas. With consistent evidence that it is more accurate than anatomic imaging in the staging and response assessment of many lymphoma subtypes, its utility continues to increase. There have therefore been efforts to incorporate PET/CT data into radiation therapy decision making and in the planning process. Further, there have also been studies investigating target volume definition for radiation therapy using PET/CT data. This article will critically review the literature and ongoing studies on the above topics, examining the value and methods of adding PET/CT data to the radiation therapy treatment algorithm. We will also discuss the various challenges and the areas where more evidence is required.

  6. Assessment of Tumor Volumes in Skull Base Glomus Tumors Using Gluc-Lys[{sup 18}F]-TOCA Positron Emission Tomography

    SciTech Connect

    Astner, Sabrina T.; Bundschuh, Ralph A.; Beer, Ambros J.; Ziegler, Sibylle I.; Krause, Bernd J.; Schwaiger, Markus; Molls, Michael; Grosu, Anca L.; Essler, Markus

    2009-03-15

    Purpose: To assess a threshold for Gluc-Lys[{sup 18}F]-TOCA positron emission tomography (PET) in target volume delineation of glomus tumors in the skull base and to compare with MRI-based target volume delineation. Methods and Materials: The threshold for volume segmentation in the PET images was determined by a phantom study. Nine patients with a total of 11 glomus tumors underwent PET either with Gluc-Lys[{sup 18}F]-TOCA or with {sup 68}Ga-DOTATOC (in 1 case). All patients were additionally scanned by MRI. Positron emission tomography and MR images were transferred to a treatment-planning system; MR images were analyzed for lesion volume by two observers, and PET images were analyzed by a semiautomated thresholding algorithm. Results: Our phantom study revealed that 32% of the maximum standardized uptake value is an appropriate threshold for tumor segmentation in PET-based target volume delineation of gross tumors. Target volume delineation by MRI was characterized by high interobserver variability. In contrast, interobserver variability was minimal if fused PET/MRI images were used. The gross tumor volumes (GTVs) determined by PET (GTV-PET) showed a statistically significant correlation with the GTVs determined by MRI (GTV-MRI) in primary tumors; in recurrent tumors higher differences were found. The mean GTV-MRI was significantly higher than mean GTV-PET. The increase added by MRI to the common volume was due to scar tissue with strong signal enhancement on MRI. Conclusions: In patients with glomus tumors, Gluc-Lys[{sup 18}F]-TOCA PET helps to reduce interobserver variability if an appropriate threshold for tumor segmentation has been determined for institutional conditions. Especially in patients with recurrent tumors after surgery, Gluc-Lys[{sup 18}F]-TOCA PET improves the accuracy of GTV delineation.
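The semiautomated segmentation step described above (keeping voxels above a fixed fraction, here 32%, of the maximum standardized uptake value) can be sketched as follows; the voxel size and the synthetic uptake map are illustrative assumptions, not the study's data.

```python
import numpy as np

def segment_volume(suv, fraction=0.32, voxel_ml=0.001):
    """Threshold segmentation of a PET uptake map: voxels at or above
    fraction * max(SUV) form the gross tumor volume (GTV).
    Returns the binary mask and the segmented volume in millilitres."""
    mask = suv >= fraction * suv.max()
    return mask, mask.sum() * voxel_ml

# Synthetic uptake map: background SUV 1.0, a hot 10x10x10 lesion at SUV 8.
suv = np.ones((32, 32, 32))
suv[10:20, 10:20, 10:20] = 8.0
mask, vol_ml = segment_volume(suv)
```

With a maximum SUV of 8, the 32% threshold is 2.56, so only the lesion voxels survive. Because the threshold is derived from the image itself rather than drawn by hand, two observers running the same algorithm obtain the same contour, which is the source of the reduced interobserver variability reported in the abstract.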

  7. Online rate control in digital cameras for near-constant distortion based on minimum/maximum criterion

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Yong; Ortega, Antonio

    2000-04-01

    We address the problem of online rate control in digital cameras, where the goal is to achieve near-constant distortion for each image. Digital cameras usually have a pre-determined number of images that can be stored for the given memory size and require limited time delay and constant quality for each image. Due to time delay restrictions, each image should be stored before the next image is received. Therefore, we need to define an online rate control that is based on the amount of memory used by previously stored images, the current image, and the estimated rate of future images. In this paper, we propose an algorithm for online rate control, in which an adaptive reference, a 'buffer-like' constraint, and a minimax criterion (as a distortion metric to achieve near-constant quality) are used. The adaptive reference is used to estimate future images and the 'buffer-like' constraint is required to keep enough memory for future images. We show that using our algorithm to select the online bit allocation for each image in a randomly given set of images provides near-constant quality. Also, we show that our result is near optimal when a minimax criterion is used, i.e., it achieves a performance close to that obtained by applying an off-line rate control that assumes exact knowledge of the images. Suboptimal behavior is only observed in situations where the distribution of images is not truly random (e.g., if most of the 'complex' images are captured at the end of the sequence). Finally, we propose a T-step delay rate control algorithm and, using the result of the 1-step delay rate control algorithm, show that it removes this suboptimal behavior.
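The off-line minimax benchmark mentioned above can be illustrated with the standard high-rate model D_i = c_i * 2**(-2 * r_i), under which equalizing distortion across all images is minimax-optimal for a fixed total bit budget. This model and the numbers below are illustrative assumptions, not the paper's formulation.

```python
import math

def minimax_allocation(complexities, total_bits):
    """Off-line minimax bit allocation under D_i = c_i * 2**(-2*r_i):
    equal distortion for every image, given the total budget.
    Assumes the budget is large enough that all rates are non-negative."""
    n = len(complexities)
    log_sum = sum(0.5 * math.log2(c) for c in complexities)
    # Equal distortion D => r_i = 0.5*log2(c_i / D); solve the budget
    # constraint sum(r_i) = total_bits for log2(D).
    log2_D = (log_sum - total_bits) / (0.5 * n)
    D = 2.0 ** log2_D
    rates = [0.5 * math.log2(c / D) for c in complexities]
    return rates, D

# Three images of increasing complexity, 9 bits to spend in total.
rates, D = minimax_allocation([1.0, 4.0, 16.0], total_bits=9.0)
```

More complex images receive more bits, yet every image lands at exactly the same distortion D; the online algorithm in the paper tries to approach this equal-distortion operating point without knowing future images in advance.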

  8. Alternative positron-target design for electron-positron colliders

    SciTech Connect

    Donahue, R.J. ); Nelson, W.R. )

    1991-04-01

    Current electron-positron linear colliders are limited in luminosity by the number of positrons which can be generated from targets presently used. This paper examines the possibility of using an alternate wire-target geometry for the production of positrons via an electron-induced electromagnetic cascade shower. 39 refs., 38 figs., 5 tabs.

  9. Caught on Video! Using Handheld Digital Video Cameras to Support Evidence-Based Reasoning

    ERIC Educational Resources Information Center

    Lottero-Perdue, Pamela S.; Nealy, Jennifer; Roland, Christine; Ryan, Amy

    2011-01-01

    Engaging elementary students in evidence-based reasoning is an essential aspect of science and engineering education. Evidence-based reasoning involves students making claims (i.e., answers to questions, or solutions to problems), providing evidence to support those claims, and articulating their reasoning to connect the evidence to the claim. In…

  10. Recent advances in digital camera optics

    NASA Astrophysics Data System (ADS)

    Ishiguro, Keizo

    2012-10-01

    The digital camera market has extremely expanded in the last ten years. The zoom lens for digital camera is especially the key determining factor of the camera body size and image quality. Its technologies have been based on several analog technological progresses including the method of aspherical lens manufacturing and the mechanism of image stabilization. Panasonic is one of the pioneers of both technologies. I will introduce the previous trend in optics of zoom lens as well as original optical technologies of Panasonic digital camera "LUMIX", and in addition optics in 3D camera system. Besides, I would like to suppose the future trend in digital cameras.

  11. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  12. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera

  13. Recent Developments in the Design of the NLC Positron Source

    NASA Astrophysics Data System (ADS)

    Kotseroglou, T.; Bharadwaj, V.; Clendenin, J. E.; Ecklund, S.; Frisch, J.

    1999-11-01

    Recent developments in the design of the Next Linear Collider (NLC) positron source based on updated beam parameters are described. The unpolarized NLC positron source (1,2) consists of a dedicated 6.2 GeV S-band electron accelerator, a high-Z positron production target, a capture system and an L-band positron linac. The 1998 failure of the SLC target, which is currently under investigation, may lead to a variation of the target design. Progress towards a polarized positron source is also presented. A moderately polarized positron beam colliding with a highly polarized electron beam results in an effective polarization large enough to explore new physics at the NLC. One of the schemes towards a polarized positron source incorporates a polarized electron source, a 50 MeV electron accelerator, a thin target for positron production and a new capture system optimized for high-energy, small angular-divergence positrons. The yield for such a process, checked using the EGS4 code, is of the order of 10^-3. The EGS4 code has been enhanced to include the effects of polarization in the bremsstrahlung and pair-production processes.

  14. Production of a positron microprobe using a transmission remoderator.

    PubMed

    Fujinami, Masanori; Jinno, Satoshi; Fukuzumi, Masafumi; Kawaguchi, Takumi; Oguma, Koichi; Akahane, Takashi

    2008-01-01

    A production method for a positron microprobe using a beta+-decay radioisotope (22Na) source has been investigated. When a magnetically guided positron beam was extracted from the magnetic field, the combination of an extraction coil and a magnetic lens enabled us to focus the positron beam by a factor of 10 and to achieve a high transport efficiency (71%). A 150-nm-thick Ni(100) thin film was mounted at the focal point of the magnetic lens and was used as a remoderator for brightness enhancement in a transmission geometry. The remoderated positrons were accelerated by an electrostatic lens and focused on the target by an objective magnetic lens. As a result, a 4-mm-diameter positron beam could be transformed into a microprobe of 60 μm or less with 4.2% total efficiency. The S parameter profile obtained by a single-line scan of a test specimen coincided well with the defect distribution. This technique for a positron microprobe is applicable to an accelerator-based high-intensity positron source and allows 3-dimensional vacancy-type defect analysis, as well as providing a positron source for a transmission positron microscope.

  15. Moisture determination in composite materials using positron lifetime techniques

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Holt, W. R.; Mock, W., Jr.

    1980-01-01

    A technique was developed which has the potential of providing information on the moisture content as well as its depth in the specimen. This technique was based on the dependence of positron lifetime on the moisture content of the composite specimen. The positron lifetime technique of moisture determination and the results of the initial studies are described.

  16. Multi-PSPMT scintillation camera

    SciTech Connect

    Pani, R.; Pellegrini, R.; Trotta, G.; Scopinaro, F.; Soluri, A.; Vincentis, G. de; Scafe, R.; Pergola, A.

    1999-06-01

    Gamma ray imaging is usually accomplished by the use of a relatively large scintillating crystal coupled to either a number of photomultipliers (PMTs) (Anger Camera) or to a single large Position Sensitive PMT (PSPMT). Recently the development of new diagnostic techniques, such as scintimammography and radio-guided surgery, have highlighted a number of significant limitations of the Anger camera in such imaging procedures. In this paper a dedicated gamma camera is proposed for clinical applications with the aim of improving image quality by utilizing detectors with an appropriate size and shape for the part of the body under examination. This novel scintillation camera is based upon an array of PSPMTs (Hamamatsu R5900-C8). The basic concept of this camera is identical to the Anger Camera with the exception of the substitution of PSPMTs for the PMTs. In this configuration it is possible to use the high resolution of the PSPMTs and still correctly position events lying between PSPMTs. In this work the test configuration is a 2 by 2 array of PSPMTs. Some advantages of this camera are: spatial resolution less than 2 mm FWHM, good linearity, thickness less than 3 cm, light weight, lower cost than equivalent area PSPMT, large detection area when coupled to scintillating arrays, small dead boundary zone (< 3 mm) and flexibility in the shape of the camera.

  17. Positron clouds within thunderstorms

    NASA Astrophysics Data System (ADS)

    Dwyer, Joseph R.; Smith, David M.; Hazelton, Bryna J.; Grefenstette, Brian W.; Kelley, Nicole A.; Lowell, Alexander W.; Schaal, Meagan M.; Rassoul, Hamid K.

    2015-08-01

    We report the observation of two isolated clouds of positrons inside an active thunderstorm. These observations were made by the Airborne Detector for Energetic Lightning Emissions (ADELE), an array of six gamma-ray detectors, which flew on a Gulfstream V jet aircraft through the top of an active thunderstorm in August 2009. ADELE recorded two 511 keV gamma-ray count rate enhancements, 35 s apart, each lasting approximately 0.2 s. The enhancements, which were approximately a factor of 12 above background, were both accompanied by electrical activity as measured by a flat-plate antenna on the underside of the aircraft. The energy spectra were consistent with a source mostly composed of positron annihilation gamma rays, with a prominent 511 keV line clearly visible in the data. Model fits to the data suggest that the aircraft was briefly immersed in clouds of positrons, more than a kilometre across. It is not clear how the positron clouds were created within the thunderstorm, but it is possible they were caused by the presence of the aircraft in the electrified environment.

  18. Positron excitation of neon

    NASA Technical Reports Server (NTRS)

    Parcell, L. A.; Mceachran, R. P.; Stauffer, A. D.

    1990-01-01

    The differential and total cross sections for the excitation of the 3s ¹P₁⁰ and 3p ¹P₁ states of neon by positron impact were calculated using a distorted-wave approximation. The results agree well with experimental conclusions.

  19. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    PubMed

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
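
    The ratiometric step described above is simple enough to sketch: the Cy3 acceptor PL reports in the red channel and the gQD donor PL in the green channel, and the analytical signal is their intensity ratio over a detection spot. A minimal illustration (the pixel values below are hypothetical, not data from the paper):

```python
# Red/green ratiometric readout: Cy3 acceptor PL reports in the red channel,
# gQD donor PL in the green channel. Pixel values below are hypothetical.
def ratiometric_signal(red_pixels, green_pixels):
    """Mean red intensity divided by mean green intensity over a spot."""
    mean_red = sum(red_pixels) / len(red_pixels)
    mean_green = sum(green_pixels) / len(green_pixels)
    return mean_red / mean_green

blank = ratiometric_signal([20, 22, 18], [100, 98, 102])    # no target: 0.2
target = ratiometric_signal([60, 66, 54], [100, 98, 102])   # FRET-sensitized: 0.6
```

Because hybridization sensitizes acceptor emission at the expense of the donor, the ratio rises with target concentration, which is what makes the measurement self-referencing.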

  20. Modeling of photodetectors based on single-photon avalanche diode arrays for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Corbeil Therrien, Audrey

    Positron emission tomography (PET) is a valuable tool in preclinical research and medical diagnosis. The technique produces a quantitative image of specific metabolic functions by detecting annihilation photons. Detection relies on two components: a scintillator first converts the energy of the 511 keV photon into visible-light photons, and a photodetector then converts that light into an electrical signal. Recently, single-photon avalanche photodiodes (SPADs) arranged in arrays have attracted considerable interest for PET. These arrays form sensitive, robust, compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and of the readout electronics must be optimized to reach the best performance for PET. Optimizing the array quickly becomes difficult, because the various parameters interact in complex ways with the avalanche and noise-generation processes. Moreover, readout electronics for SPAD arrays remain rudimentary, and it would be profitable to analyze different readout strategies. The most economical way to address this question is to use a simulator to converge on the configuration giving the best performance. This thesis presents the development of such a simulator. It models the behaviour of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources: thermal noise (dark counts), correlated afterpulsing, and optical crosstalk. The simulator also makes it possible to test and compare new readout approaches better suited to this type of detector. Ultimately, the simulator aims to
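
    One of the noise sources such a simulator must reproduce, thermal noise (dark counts), is commonly modeled as a Poisson process with exponential inter-arrival times. A minimal sketch of that idea (the rate and time window are illustrative, not parameters from the thesis):

```python
# Thermal noise (dark counts) in a SPAD modeled as a Poisson process:
# exponential inter-arrival times at a fixed dark-count rate.
# Rate and window are illustrative, not parameters from the thesis.
import random

def dark_count_times(rate_hz, window_s, seed=1):
    """Sample dark-count timestamps (s) within [0, window_s)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)   # exponential waiting time
        if t >= window_s:
            return times
        times.append(t)

times = dark_count_times(10e3, 1.0)     # 10 kHz dark-count rate, 1 s window
```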

  1. Free volume and phase transitions of 1-butyl-3-methylimidazolium based ionic liquids from positron lifetime spectroscopy.

    PubMed

    Yu, Yang; Beichel, Witali; Dlubek, Günter; Krause-Rehberg, Reinhard; Paluch, Marian; Pionteck, Jürgen; Pfefferkorn, Dirk; Bulut, Safak; Friedrich, Christian; Pogodina, Natalia; Krossing, Ingo

    2012-05-21

    Positron annihilation lifetime spectroscopy (PALS) was used to study a series of ionic liquids (ILs) with the 1-butyl-3-methylimidazolium cation ([C4MIM](+)) but different anions [Cl](-), [BF4](-), [PF6](-), [OTf](-), [NTf2](-), and [B(hfip)4](-) with increasing anion volumes. Changes of the ortho-positronium (o-Ps) lifetime parameters with temperature were observed for crystalline and amorphous (glass, supercooled, and normal liquid) states. Evidence for distinct phase transitions, e.g. melting, crystallization and solid-solid transitions, was observed in several PALS experiments. The o-Ps mean lifetime τ3 showed smaller values in the crystalline phase than in the amorphous phase, due to denser packing of the material. The o-Ps lifetime intensity I3 in the liquid state is clearly smaller than in the crystallized state. This behaviour can be attributed to solvation of e(+) by the anions, which reduces the Ps formation probability in the normal and supercooled liquid. We observed these phenomena for the first time in applying the PALS technique to ionic liquids, in a preliminary study and in this work. Four of the ionic liquids investigated in this work (the [BF4](-), [NTf2](-), [PF6](-) and [Cl](-) ILs) exhibit supercooled phases. The specific hole densities and occupied volumes of those ILs were obtained by comparing the local free volume with the specific volume from pressure-volume-temperature (PVT) experiments. From the o-Ps lifetime, the mean size vh of the free volume holes of the four samples was calculated and compared with that calculated according to Fürth's hole theory. The hole volumes from both methods agree well. From the Cohen-Turnbull fitting of viscosity and conductivity against PALS/PVT results, the influence of the free volume on molecular transport properties was investigated.
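
    The conversion from the o-Ps lifetime τ3 to a mean hole size vh mentioned above is conventionally done with the Tao-Eldrup model, which relates the lifetime to the radius of a spherical free-volume hole. A numerical sketch (ΔR = 0.1656 nm is the standard empirical electron-layer thickness; the 2.5 ns example lifetime is illustrative, not a value from the paper):

```python
# Tao-Eldrup model linking the o-Ps lifetime tau3 to the radius of a
# spherical free-volume hole. The 2.5 ns example lifetime is illustrative.
import math

DELTA_R = 0.1656  # empirical electron-layer thickness, nm

def tao_eldrup_lifetime(radius_nm):
    """o-Ps lifetime (ns) for a spherical hole of the given radius (nm)."""
    x = radius_nm / (radius_nm + DELTA_R)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

def hole_radius_from_lifetime(tau_ns, lo=0.01, hi=1.0):
    """Invert the model by bisection (lifetime grows monotonically with R)."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if tao_eldrup_lifetime(mid) < tau_ns:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

r = hole_radius_from_lifetime(2.5)      # hole radius in nm for tau3 = 2.5 ns
v_h = 4.0 / 3.0 * math.pi * r ** 3      # mean hole volume, nm^3
```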

  2. A fall prediction methodology for elderly based on a depth camera.

    PubMed

    Alazrai, Rami; Mowafi, Yaser; Hamad, Eyad

    2015-01-01

    With the aging of the population, efficient tracking of elderly activities of daily living (ADLs) has gained interest. Advances in assistive computing and sensor technologies have made it possible to support elderly people with real-time acquisition and monitoring for emergency and medical care. In an earlier study, we proposed an anatomical-plane-based human activity representation for elderly fall detection, namely, the motion-pose geometric descriptor (MPGD). In this paper, we present a prediction framework that utilizes the MPGD to construct an accumulated-histogram-based representation of an ongoing human activity. The accumulated histograms of MPGDs are then used to train a set of support-vector-machine classifiers with a probabilistic output to predict falls in an ongoing human activity. Evaluation results of the proposed framework, using real case scenarios, demonstrate its efficacy in providing a feasible approach towards accurately predicting elderly falls. PMID:26737412
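
    A hedged sketch of the classification stage described above: histogram features fed to a support vector machine with Platt-scaled probabilistic output. The data, feature dimensionality and labels below are synthetic assumptions, not the MPGD features of the paper:

```python
# Synthetic sketch: accumulated-histogram features -> probabilistic SVM.
# The 16-bin features and toy data are assumptions, not the paper's MPGDs.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_fall = rng.normal(1.0, 0.3, size=(20, 16))   # "fall" activity histograms
X_adl = rng.normal(0.0, 0.3, size=(20, 16))    # normal ADL histograms
X = np.vstack([X_fall, X_adl])
y = np.array([1] * 20 + [0] * 20)              # 1 = fall, 0 = ADL

# probability=True enables Platt scaling for probabilistic outputs.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)
p_fall = clf.predict_proba(rng.normal(1.0, 0.3, size=(1, 16)))[0, 1]
```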

  3. Twenty-one degrees of freedom model based hand pose tracking using a monocular RGB camera

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jong-Il; Park, Hanhoon

    2016-01-01

    It is difficult to visually track a user's hand because of the many degrees of freedom (DOF) a hand has. For this reason, most model-based hand pose tracking methods have relied on the use of multiview images or RGB-D images. This paper proposes a model-based method that accurately tracks three-dimensional hand poses using monocular RGB images in real time. The main idea of the proposed method is to reduce hand tracking ambiguity by adopting a step-by-step estimation scheme consisting of three steps performed in consecutive order: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation. In addition, this paper proposes highly effective algorithms for each step. With the assumption that a human hand can be considered as an assemblage of articulated planes, the proposed method uses a piece-wise planar hand model which enables hand model regeneration. The hand model regeneration modifies the hand model to fit the current user's hand and improves the accuracy of the hand pose estimation results. Above all, the proposed method can operate in real time using only CPU-based processing. Consequently, it can be applied to various platforms, including egocentric vision devices such as wearable glasses. The results of several experiments conducted verify the efficiency and accuracy of the proposed method.

  4. TEQUILA: NIR camera/spectrograph based on a Rockwell 1024x1024 HgCdTe FPA

    NASA Astrophysics Data System (ADS)

    Ruiz, Elfego; Sohn, Erika; Cruz-Gonzales, Irene; Salas, Luis; Parraga, Antonio; Perez, Manuel; Torres, Roberto; Cobos Duenas, Francisco J.; Gonzalez, Gaston; Langarica, Rosalia; Tejada, Carlos; Sanchez, Beatriz; Iriarte, Arturo; Valdez, J.; Gutierrez, Leonel; Lazo, Francisco; Angeles, Fernando

    1998-08-01

    We describe the configuration and operation modes of the IR camera/spectrograph TEQUILA, based on a 1024 × 1024 HgCdTe FPA. The optical system will allow three modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is being designed to consist of the following: 1) an LN2 dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder; 2) control and readout electronics based on DSP modules linked to a workstation through fiber optics; 3) an opto-mechanical assembly cooled to -30 degrees that provides efficient operation of the instrument in its various modes; 4) a control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provision to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. Final image acquisition and control of the whole instrument are carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope of the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican IR-Optical Telescope.

  5. Paper-based three-dimensional microfluidic device for monitoring of heavy metals with a camera cell phone.

    PubMed

    Wang, Hu; Li, Ya-jie; Wei, Jun-feng; Xu, Ji-run; Wang, Yun-hua; Zheng, Guo-xia

    2014-05-01

    A 3D paper-based microfluidic device has been developed for colorimetric determination of selected heavy metals in water samples by stacking layers of wax-patterned paper and double-sided adhesive tape. It has the capability of wicking fluids and distributing microliter volumes of samples from a single inlet into arrays of detection zones without external pumps, so a range of metal assays can be performed simply and inexpensively. We demonstrate a prototype of four sample inlets for up to four heavy metal assays each, with detection limits as follows: Cu(II) = 0.29 ppm, Ni(II) = 0.33 ppm, Cd(II) = 0.19 ppm, and Cr(VI) = 0.35 ppm, which provided quantitative data in agreement with values obtained from atomic absorption. It has the ability to identify these four metals in mixtures and is immune to interference from either nontoxic metal ions such as Na(I) and K(I) or components found in reservoir or beach water. With the incorporation of a portable detector, a camera mobile phone, this 3D paper-based microfluidic device should be useful as a simple, rapid, on-site screening approach for heavy metals in aquatic environments. PMID:24618990
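
    Colorimetric quantification of this kind typically rests on a linear calibration of spot intensity against standard concentrations, which is then inverted for unknown samples. A generic sketch (the Cu(II) standards and intensities below are made up for illustration, not the paper's data):

```python
# Generic colorimetric calibration: least-squares line of spot intensity vs.
# standard concentration, then inversion for an unknown sample.
# All numbers are made-up illustrations, not the paper's data.
def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

conc = [0.0, 0.5, 1.0, 2.0, 4.0]            # hypothetical Cu(II) standards, ppm
signal = [2.0, 12.5, 22.0, 41.5, 82.0]      # background-corrected intensities
slope, intercept = fit_line(conc, signal)
unknown_ppm = (30.0 - intercept) / slope    # sample reading of intensity 30
```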

  6. [The linear hyperspectral camera rotating scan imaging geometric correction based on the precise spectral sampling].

    PubMed

    Wang, Shu-min; Zhang, Ai-wu; Hu, Shao-xing; Wang, Jing-meng; Meng, Xian-gang; Duan, Yi-hao; Sun, Wei-dong

    2015-02-01

    Because the rotation speed of the ground-based hyperspectral imaging system during image collection can exceed its speed limit, data are missing in the rectified image, appearing as black lines. At the same time, the raw images are seriously distorted, which affects feature-information classification and identification. To solve these problems, this paper first introduces each component of the ground-based hyperspectral imaging system and gives the general data-collection process. The rotation speed during data collection is controlled according to the image coverage of each frame and the image-collection speed of the system. The spatial orientation model is then derived in detail from the start scanning angle, the stop scanning angle, the minimum distance between the sensor and the scanned object, etc. The oriented image is divided into grids and spectrally resampled, and the general flow of distortion correction is presented. Since the spatial resolution differs between adjacent frames, the minimum ground sampling distance is employed as the grid unit for dividing the geo-referenced image, in order to preserve the highest resolution in the corrected image. To account for the spectral distortion caused by direct sampling when the new uniform grid and the old uneven grid are superimposed to take pixel values, a precise spectral sampling method based on the position distribution is proposed. A distorted image collected at the Lao Si Cheng ruins in Zhangjiajie, Hunan province, was corrected with the proposed algorithm; the features keep their original geometric characteristics, which verifies the validity of the algorithm. We also extracted the spectra of different features and computed correlation coefficients. The results show that the improved spectral sampling method is

  7. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated through improper preservation or overuse, and manually transcribing their data into a computer is laborious and error-prone. Digitizing these drawings and establishing the corresponding database will therefore ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point location is devised that, with the help of grid distribution information, avoids point-by-point processing. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.

  8. Imaging camera system of OYGBR-phosphor-based white LED lighting

    NASA Astrophysics Data System (ADS)

    Kobashi, Katsuya; Taguchi, Tsunemasa

    2005-03-01

    The near-ultraviolet (nUV) white LED approach is analogous to three-color fluorescent lamp technology, being based on the conversion of nUV radiation to visible light via photoluminescence in phosphor materials. The nUV light is not included in the white light generated by nUV-based white LED devices, so this technology can provide a higher quality of white light than the blue-plus-YAG method. A typical device demonstrates white luminescence with Tc = 3,700 K, Ra > 93, luminous efficacy > 40 lm/W and chromaticity (x, y) = (0.39, 0.39). The orange, yellow, green and blue (OYGB) or orange, yellow, red, green and blue (OYRGB) device shows a luminescence spectrum broader than that of an RGB white LED and a better color rendering index. Such superior luminous characteristics could be useful for several kinds of endoscope. We have obtained excellent pictures of the digestive organs in the stomach of a dog, owing to the strong green component and high Ra.

  9. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate them without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the distribution of features in actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
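
    Step (iii) above, recovering the external parameters from the essential matrix, can be sketched with the standard SVD factorization. This is the textbook decomposition, not the paper's weighted estimation, and the cheirality test that selects among the four candidate poses is omitted:

```python
# Textbook SVD decomposition of an essential matrix E into (R, t).
# The cheirality (points-in-front-of-both-cameras) test that picks one of
# the four candidate poses is omitted from this sketch.
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that [t]_x @ v == cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Return one rotation/unit-translation pair (R, t) with E ~ [t]_x R."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:    # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    return U @ W @ Vt, U[:, 2]

# Round trip on a known pose: E built as [t]_x R is recovered up to sign.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])
E = skew(t_true) @ R_true
R, t = decompose_essential(E)
```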

  10. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    SciTech Connect

    Garcia, Marie-Paule; Villoing, Daphnée; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-15

    computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.

  11. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design for a redundant robot manipulator based on the absolute position given by an external PSD vision sensor. The redundancy provides the potential capability to avoid obstacles while continuing given end-effector jobs under contact with a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.

  12. Deriving hydraulic roughness from camera-based high resolution topography in field and laboratory experiments

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Neugirg, Fabian; Ebert, Louisa; Haas, Florian; Schmidt, Jürgen; Becht, Michael; Schindewolf, Marcus

    2016-04-01

    Hydraulic roughness, represented by Manning's n, is an essential input parameter in physically based soil erosion modeling. To acquire roughness values for a given area, on-site flow experiments have to be carried out; their results depend on the choice of test plot location and are thereby subject to the researchers' judgment. This study aims at a methodology for deriving Manning's n from very high-resolution surface models created with structure-from-motion approaches. Data acquisition took place during several field experiments in the Lainbach valley, southern Germany, on agricultural sites in Saxony, eastern Germany, and in central Brazil. Rill and interrill conditions were simulated by flow experiments. To validate our findings, flow velocity, an input to the Manning equation, was measured with coloured dye. Grain and aggregate sizes were derived by measuring distances from a best-fit line to the reconstructed soil surface. Several diameters from D50 to D90 were tested, with D90 showing the best correlation between the tracer experiments and the photogrammetrically acquired data. A variety of roughness parameters were tested (standard deviation, random roughness, Garbrecht's n and D90). The best agreement between particle size and hydraulic roughness was achieved with a non-linear sigmoid function and D90, rather than with the Garbrecht equation or statistical parameters. To consolidate these findings, a laboratory setup was created to reproduce the field data under controlled conditions, excluding unknown influences such as infiltration and changes in surface morphology caused by erosion.
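
    For reference, the Manning equation underlying this workflow gives the roughness directly from measured velocity, hydraulic radius and slope: n = R^(2/3) S^(1/2) / v. A minimal sketch (the rill-flow numbers are illustrative, not measurements from the study):

```python
# Manning equation: n = R^(2/3) * S^(1/2) / v.
# The rill-flow numbers are illustrative, not measurements from the study.
def mannings_n(velocity, hydraulic_radius, slope):
    """Manning's roughness from flow velocity v (m/s), hydraulic radius
    R (m) and energy slope S (m/m)."""
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / velocity

# Hypothetical dye-traced rill flow: 0.25 m/s, R = 4 mm, 8 % slope.
n = mannings_n(velocity=0.25, hydraulic_radius=0.004, slope=0.08)
```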

  13. A Kinect(™) camera based navigation system for percutaneous abdominal puncture.

    PubMed

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. Following the recent release of the second-generation Kinect(™), we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its needle-insertion guidance performance with that of the first-generation Kinect(™). For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect(™) depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect(™) for Windows version 2 (Kinect(™) V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect(™) for Windows version 1 (Kinect(™) V1) was tested with 12 insertions, and the TRE evaluated with the Kinect(™) V1 is statistically significantly larger than that with the Kinect(™) V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is
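
    The core of each ICP iteration used for physical-to-image registration is a closed-form rigid alignment of matched point sets (the Kabsch/Procrustes step). A generic sketch of that step, not the authors' implementation:

```python
# Closed-form rigid alignment (Kabsch/Procrustes), the core step inside each
# ICP iteration; a generic sketch, not the authors' implementation.
import numpy as np

def rigid_align(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t,
    for matched n x d point sets P and Q (rows are points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # avoid reflections
    D = np.diag([1.0] * (P.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Recover a known pose from matched points.
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
Q = P @ R_true.T + t_true
R, t = rigid_align(P, Q)
```

Full ICP alternates this step with a nearest-neighbour correspondence search, which is why a good initial position (here provided by the 2D shape matching) matters.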

  14. Change-based threat detection in urban environments with a forward-looking camera

    NASA Astrophysics Data System (ADS)

    Morton, Kenneth, Jr.; Ratto, Christopher; Malof, Jordan; Gunter, Michael; Collins, Leslie; Torrione, Peter

    2012-06-01

    Roadside explosive threats continue to pose a significant risk to soldiers and civilians in conflict areas around the world. These objects are easy to manufacture and procure, but due to their ad hoc nature, they are difficult to reliably detect using standard sensing technologies. Although large roadside explosive hazards may be difficult to conceal in rural environments, urban settings provide a much more complicated background where seemingly innocuous objects (e.g., piles of trash, roadside debris) may be used to obscure threats. Since direct detection of all innocuous objects would flag too many objects to be of use, techniques must be employed to reduce the number of alarms generated and highlight only a limited subset of possibly threatening regions for the user. In this work, change detection techniques are used to reduce false alarm rates and increase detection capabilities for possible threat identification in urban environments. The proposed model leverages data from multiple video streams collected over the same regions by first applying video alignment and then using various distance metrics to detect changes based on image keypoints in the video streams. Data collected at an urban warfare simulation range at an Eastern US test site was used to evaluate the proposed approach, and significant reductions in false alarm rates compared to simpler techniques are illustrated.

  15. Two low-cost digital camera-based platforms for quantitative creatinine analysis in urine.

    PubMed

    Debus, Bruno; Kirsanov, Dmitry; Yaroshenko, Irina; Sidorova, Alla; Piven, Alena; Legin, Andrey

    2015-10-01

    In clinical analysis creatinine is a routine biomarker for the assessment of renal and muscular dysfunctions. Although several techniques have been proposed for a fast and accurate quantification of creatinine in human serum or urine, most of them require expensive or complex apparatus, advanced sample preparation or skilled operators. To circumvent these issues, we propose two home-made platforms based on a CD Spectroscope (CDS) and Computer Screen Photo-assisted Technique (CSPT) for the rapid assessment of creatinine level in human urine. Both systems display a linear range (r(2) = 0.9967 and 0.9972, respectively) from 160 μmol L(-1) to 1.6 mmol L(-1) for standard creatinine solutions (n = 15) with respective detection limits of 89 μmol L(-1) and 111 μmol L(-1). Good repeatability was observed for intra-day (1.7-2.9%) and inter-day (3.6-6.5%) measurements evaluated on three consecutive days. The performance of CDS and CSPT was also validated in real human urine samples (n = 26) using capillary electrophoresis data as reference. Corresponding Partial Least-Squares (PLS) regression models provided for mean relative errors below 10% in creatinine quantification. PMID:26454461

  16. Heads up and camera down: a vision-based tracking modality for mobile mixed reality.

    PubMed

    DiVerdi, Stephen; Höllerer, Tobias

    2008-01-01

    Anywhere Augmentation pursues the goal of lowering the initial investment of time and money necessary to participate in mixed reality work, bridging the gap between researchers in the field and regular computer users. Our paper contributes to this goal by introducing the GroundCam, a cheap tracking modality with no significant setup necessary. By itself, the GroundCam provides high frequency, high resolution relative position information similar to an inertial navigation system, but with significantly less drift. We present the design and implementation of the GroundCam, analyze the impact of several design and run-time factors on tracking accuracy, and consider the implications of extending our GroundCam to different hardware configurations. Motivated by the performance analysis, we developed a hybrid tracker that couples the GroundCam with a wide area tracking modality via a complementary Kalman filter, resulting in a powerful base for indoor and outdoor mobile mixed reality work. To conclude, the performance of the hybrid tracker and its utility within mixed reality applications is discussed.
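
    The hybrid idea, high-rate relative tracking corrected at low frequency by an absolute reference, can be illustrated with a one-dimensional fixed-gain complementary filter. The paper uses a complementary Kalman filter; this scalar version is a deliberate simplification, and all numbers are synthetic:

```python
# One-dimensional complementary-filter sketch (fixed gain), illustrating the
# hybrid idea; the paper itself uses a complementary Kalman filter.
def complementary_fuse(rel, absolute, alpha=0.98):
    """Fuse a drifting high-rate relative track (GroundCam-like) with
    low-weight corrections from an absolute wide-area reference."""
    fused = [absolute[0]]
    for k in range(1, len(rel)):
        increment = rel[k] - rel[k - 1]          # high-frequency motion
        prediction = fused[-1] + increment
        fused.append(alpha * prediction + (1.0 - alpha) * absolute[k])
    return fused

# Drifting relative track vs. a drift-free absolute reference (synthetic).
truth = [float(k) for k in range(200)]
rel = [x + 0.05 * k for k, x in enumerate(truth)]   # 5 cm/step drift
fused = complementary_fuse(rel, truth)
```

The gain alpha sets the crossover: high alpha trusts the smooth relative increments, while the small (1 - alpha) term keeps the accumulated drift bounded.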

  18. Effect of (11)C-Methionine Positron Emission Tomography on Gross Tumor Volume Delineation in Stereotactic Radiotherapy of Skull Base Meningiomas

    SciTech Connect

    Astner, Sabrina T. Dobrei-Ciuchendea, Mihaela; Essler, Markus; Bundschuh, Ralf A.; Sai, Heitetsu; Schwaiger, Markus; Molls, Michael; Weber, Wolfgang A.; Grosu, Anca-Ligia

    2008-11-15

    Purpose: To evaluate the effect of trimodal image fusion using computed tomography (CT), magnetic resonance imaging (MRI) and {sup 11}C-methionine positron emission tomography (MET-PET) for gross tumor volume delineation in fractionated stereotactic radiotherapy of skull base meningiomas. Patients and Methods: In 32 patients with skull base meningiomas, the gross tumor volume (GTV) was outlined on CT scans fused to contrast-enhanced MRI (GTV-MRI/CT). A second GTV, encompassing the MET-PET positive region only (GTV-PET), was generated. The additional information obtained by MET-PET concerning the GTV delineation was evaluated using the PET/CT/MRI co-registered images. The sizes of the overlapping regions of GTV-MRI/CT and GTV-PET were calculated and the amounts of additional volumes added by the complementing modality determined. Results: The addition of MET-PET was beneficial for GTV delineation in all but 3 patients. MET-PET detected small tumor portions with a mean volume of 1.6 {+-} 1.7 cm{sup 3} that were not identified by CT or MRI. The mean percentage of enlargement of the GTV using MET-PET as an additional imaging method was 9.4% {+-} 10.7%. Conclusions: Our data have demonstrated that integration of MET-PET in radiotherapy planning of skull base meningiomas can influence the GTV, possibly resulting in an increase, as well as in a decrease.
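
The overlap volumes and percentage enlargement reported above reduce to set operations on binary voxel masks. A minimal sketch with hypothetical masks and voxel size:

```python
import numpy as np

def gtv_comparison(gtv_mri_ct, gtv_pet, voxel_cc):
    """Given two boolean voxel masks and the voxel volume in cm^3, return
    the overlap volume, the volume PET adds beyond MRI/CT, and the
    percentage GTV enlargement contributed by PET."""
    overlap = np.logical_and(gtv_mri_ct, gtv_pet).sum() * voxel_cc
    pet_only = np.logical_and(gtv_pet, ~gtv_mri_ct).sum() * voxel_cc
    enlargement = 100.0 * pet_only / (gtv_mri_ct.sum() * voxel_cc)
    return overlap, pet_only, enlargement
```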

  19. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-06-04

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with a few centimeters error. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass
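
The 0.88-0.92 figures quoted above are correlation coefficients between the Kinect-derived plant area and the measured dry biomass. A minimal Pearson-r sketch (illustrative numbers, not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```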

  20. Matching the Best Viewing Angle in Depth Cameras for Biomass Estimation Based on Poplar Seedling Geometry

    PubMed Central

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-01-01

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (−45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and −45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with a few centimeters error. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  1. Generation of monoenergetic positrons

    SciTech Connect

    Hulett, L.D. Jr.; Dale, J.M.; Miller, P.D. Jr.; Moak, C.D.; Pendyala, S.; Triftshaeuser, W.; Howell, R.H.; Alvarez, R.A.

    1983-01-01

    Many experiments have been performed in the generation and application of monoenergetic positron beams using annealed tungsten moderators and fast sources of {sup 58}Co, {sup 22}Na, {sup 11}C, and LINAC bremsstrahlung. This paper will compare the degrees of success of our various approaches. Moderators made from both single-crystal and polycrystalline tungsten have been tried. Efforts to grow thin films of tungsten to be used as transmission moderators and brightness enhancement devices are in progress.

  2. Tumor Delineation Based on Time-Activity Curve Differences Assessed With Dynamic Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography in Rectal Cancer Patients

    SciTech Connect

    Janssen, Marco Aerts, Hugo; Ollers, Michel C.; Bosmans, Geert; Lee, John A.; Buijsen, Jeroen; Ruysscher, Dirk de; Lambin, Philippe; Lammering, Guido; Dekker, Andre L.A.J.

    2009-02-01

    Purpose: To develop an unsupervised tumor delineation method based on time-activity curve (TAC) shape differences between tumor tissue and healthy tissue and to compare the resulting contour with the two tumor contouring methods mostly used nowadays. Methods and Materials: Dynamic positron emission tomography-computed tomography (PET-CT) acquisition was performed for 60 min starting directly after fluorodeoxyglucose (FDG) injection. After acquisition and reconstruction, the data were filtered to attenuate noise. Correction for tissue motion during acquisition was applied. For tumor delineation, the TAC slope values were k-means clustered into two clusters. The resulting tumor contour (Contour I) was compared with a contour manually drawn by the radiation oncologist (Contour II) and a contour generated using a threshold of the maximum standardized uptake value (SUV; Contour III). Results: The tumor volumes of Contours II and III were significantly larger than the tumor volumes of Contour I, with both Contours II and III containing many voxels showing flat TACs at low activities. However, in some cases, Contour II did not cover all voxels showing upward TACs. Conclusion: Both automated SUV contouring and manual tumor delineation possibly incorrectly assign healthy tissue, showing flat TACs, as being malignant. On the other hand, in some cases the manually drawn tumor contours do not cover all voxels showing steep upward TACs, suspected to be malignant. Further research should be conducted to validate the possible superiority of tumor delineation based on dynamic PET analysis.
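
The delineation idea above can be sketched as two steps (this is an illustration of the technique, not the authors' code): fit a least-squares slope to each voxel's TAC, then split the slopes into two clusters with 1-D k-means; the higher-slope cluster is the candidate tumor.

```python
def tac_slope(times, activities):
    """Least-squares slope of a time-activity curve."""
    n = len(times)
    mt, ma = sum(times) / n, sum(activities) / n
    num = sum((t - mt) * (a - ma) for t, a in zip(times, activities))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

def kmeans2(values, iters=50):
    """Two-cluster 1-D k-means; label 1 marks the higher-centroid cluster."""
    c = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            groups[abs(v - c[0]) > abs(v - c[1])].append(v)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return [int(abs(v - c[0]) > abs(v - c[1])) for v in values]
```

Healthy tissue with flat TACs lands in the low-slope cluster; voxels with steep upward TACs are separated into the high-slope cluster.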

  3. Vision-based control of holonomic robots for 3-dimensional rigid-body positioning using camera-space manipulation

    NASA Astrophysics Data System (ADS)

    Chen, Wenzong

    Camera-space manipulation was developed in this work for 3-dimensional, 6-degree-of-freedom rigid-body positioning tasks with unknown work piece position and orientation. Using standard imaging devices and the very large GMF S-400 manipulator, high maneuver precision was achieved with negligible passive compliance. The maneuver succeeded consistently within a large range of work piece position and orientation provided the piece remained in the cameras' fields of view. The maneuver precision was further improved by accounting for the perspective effect in the camera-space locations of visually-detected cues painted on the objects to be positioned, using an iterative procedure that we devised in this work. The application of this procedure also increased the range of work piece position and orientation within which the maneuver succeeded consistently. Also developed in this work was an iterative method for the estimation of grasp uncertainty in rigid-body positioning with camera-space manipulation. This added capability of camera-space manipulation allowed rigid-body positioning tasks to be accomplished with both unknown work piece position and orientation and unknown grasp.

  4. Quantum resonances in reflection of relativistic electrons and positrons

    NASA Astrophysics Data System (ADS)

    Eykhorn, Yu. L.; Korotchenko, K. B.; Pivovarov, Yu. L.; Takabayashi, Y.

    2015-07-01

    Calculations based on the use of a realistic potential of the system of crystallographic planes confirm earlier results on the existence of resonances in the reflection of relativistic electrons and positrons by the crystal surface, if the crystallographic planes are parallel to the surface. The physical reason for the predicted phenomena, similar to the band structure of transverse energy levels, is connected with the Bloch form of the wave functions of electrons (positrons) near the crystallographic planes, which appears both in the case of planar channeling of relativistic electrons (positrons) and in reflection by a crystal surface. Calculations show that the positions of maxima in the reflection of relativistic electrons and positrons by a crystal surface depend specifically on the angle of incidence with respect to the crystal surface and on the relativistic factor of the electrons/positrons. These maxima form Darwin tables similar to those in ultra-cold neutron diffraction.

  5. Development of a Positron Source for JLab at the IAC

    SciTech Connect

    Forest, Tony

    2013-10-12

    We report on the research performed towards the development of a positron source for Jefferson Lab's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) in Newport News, VA. The first year of work was used to benchmark the predictions of our current simulation against positron production efficiency measurements at the IAC. The second year used the benchmarked simulation to design a beam line configuration which optimized positron production efficiency while minimizing radioactive waste, as well as to design and construct a positron converter target. The final year quantified the performance of the positron source. This joint research and development project brought together the experiences of both electron accelerator facilities. Our intention is to use the project as a springboard towards developing a program of accelerator-based research and education which will train students to meet the needs of both facilities as well as provide a pool of trained scientists.

  6. Development of Texas intense positron source

    NASA Astrophysics Data System (ADS)

    Köymen, A. R.; Ünlü, K.; Jacobsen, F. M.; Göktepeli, S.; Wehring, B. W.

    1999-02-01

    The Texas Intense Positron Source (TIPS) is a reactor-based low-energy positron beam facility utilizing some novel techniques in positron beam production. This facility will be located at the University of Texas (UT) at Austin Nuclear Engineering Teaching Laboratory (NETL) and is being developed by UT Austin and UT Arlington researchers. TIPS will use a large-area (total area of 900-1800 cm²) ⁶⁴Cu source to supply fast β⁺ particles for subsequent moderation to form an intense monoenergetic positron beam in the energy range of 0-50 keV with an expected intensity of 10⁸ e⁺/s. Natural copper will be neutron activated near the core of the NETL 1 MW TRIGA Mark II research reactor to produce the ⁶⁴Cu isotope. The activated source will be transported to the moderator/remoderator assembly, outside the biological shield of the reactor. This assembly combines the primary moderation and posterior remoderation of the fast β⁺ particles into one stage using solid Kr to produce a low-energy positron source of a few eV with a diameter of 8 mm. The low-energy positron beam is then extracted by an electrostatic modified SOA gun and, after further acceleration to 5 keV, the beam is focused onto the object slit of a 90° bending magnet. After further focusing and another 90° bend, the beam enters the main accelerator/decelerator that transports the beam onto the target for experimentation. The components of TIPS have been manufactured and are currently being optimized. In this communication we present some of the details of the TIPS facility and furthermore briefly discuss its intended applications.

  7. Formation of a high intensity low energy positron string

    NASA Astrophysics Data System (ADS)

    Donets, E. D.; Donets, E. E.; Syresin, E. M.; Itahashi, T.; Dubinov, A. E.

    2004-05-01

    The possibility of high intensity, low energy positron beam production is discussed. The proposed Positron String Trap (PST) is based on the principles and technology of the Electron String Ion Source (ESIS) developed at JINR during the last decade. A linear version of ESIS has been used successfully for the production of intense highly charged ion beams of various elements. Now the Tubular Electron String Ion Source (TESIS) concept is under study, and this opens new and promising possibilities in physics and technology. In this report, we discuss the application of the tubular-type trap for the storage of positrons cooled to cryogenic temperatures of 0.05 meV. It is intended that the positron flux at an energy of 1-5 eV, produced by an external source, is injected into the Tubular Positron Trap, which has a construction similar to the TESIS. The low energy positrons are then captured in the PST Penning trap and are cooled down by their synchrotron radiation in the strong (5-10 T) applied magnetic field. It is expected that the proposed PST should permit storing and cooling to cryogenic temperature up to 5×10⁹ positrons. The accumulated cooled positrons can be used further for various physics applications, for example, antihydrogen production.

  8. Positron lifetime spectrometer using a DC positron beam

    DOEpatents

    Xu, Jun; Moxom, Jeremy

    2003-10-21

    An entrance grid is positioned in the incident beam path of a DC beam positron lifetime spectrometer. The electrical potential difference between the sample and the entrance grid provides simultaneous acceleration of both the primary positrons and the secondary electrons. The result is a reduction in the time spread induced by the energy distribution of the secondary electrons. In addition, the sample, sample holder, entrance grid, and entrance face of the multichannel plate electron detector assembly are made parallel to each other, and are arranged at a tilt angle to the axis of the positron beam to effectively separate the path of the secondary electrons from the path of the incident positrons.

  9. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    NASA Astrophysics Data System (ADS)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA MASCARA is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor the near-entire sky at each location. Once all stations have been installed, MASCARA will be able to provide a nearly 24-hr coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V=4-8 magnitude range - currently not probed by space- or ground-based surveys. The bright/nearby transiting planet systems, which MASCARA will discover, will be the key targets for detailed planet atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras showing that a precision of 0.3-1% per hour can be readily achieved. We plan to roll out the first MASCARA station before the end of 2013. A 5-station MASCARA can within two years discover up to a dozen of the brightest transiting planet systems in the sky.

  10. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    NASA Astrophysics Data System (ADS)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
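
The flagging logic described — evaluate frame brightness and flag relative increases — can be sketched as a running-baseline threshold. The window size and threshold below are illustrative assumptions, not the authors' actual script:

```python
from statistics import mean, pstdev

def flag_activity(frame_means, window=10, k=3.0):
    """Flag frames whose mean pixel brightness exceeds the mean of the
    previous `window` frames by more than k standard deviations."""
    flags = [False] * len(frame_means)
    for i in range(window, len(frame_means)):
        base = frame_means[i - window:i]
        mu, sd = mean(base), pstdev(base)
        if frame_means[i] > mu + k * max(sd, 1e-6):  # guard against sd == 0
            flags[i] = True
    return flags
```

Feeding the per-frame mean brightness of a webcam stream into this function flags a sudden incandescent event while ignoring ordinary frame-to-frame flicker.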

  11. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    NASA Astrophysics Data System (ADS)

    Harrild, M.; Webley, P.; Dehn, J.

    2014-12-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.

  12. A long-range camera based on an HD MCT array of 12μm pixels

    NASA Astrophysics Data System (ADS)

    Davy, D.; Ashley, S.; Davison, B.; Ashcroft, A.; McEwen, R. K.; Moore, R.

    2014-06-01

    The development of a new thermal imaging camera, for long range surveillance applications, is described together with the enabling technology. Previous publications have described the development of large arrays of 12μm pixels using Metal Organic Vapour Phase Epitaxy (MOVPE) grown Mercury Cadmium Telluride (MCT) for wide area surveillance applications. This technology has been leveraged to produce the low cost 1280×720 pixel Medium Wave IR focal plane array at the core of the new camera. Also described is the newly developed, high performance, ×12 continuous zoom lens which, together with the detector, achieves an Instantaneous Field of View (IFOV) of 12.5μrad/pixel enabling long detection, recognition and identification ranges. Novel image processing features, including the turbulence mitigation algorithms deployed in the camera processing electronics, are also addressed. Resultant imagery and performance will be presented.

  13. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is provided. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; however, the technique needs adjustment for use with room temperature cameras. This article describes the adjustment made to the equation, and a test of this method.
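
As an illustration of the mean-variance (photon-transfer) approach with a dark-frame term — a sketch of the general technique, not necessarily the exact adjustment this article derives:

```python
import numpy as np

def camera_gain(flat_a, flat_b, dark_a, dark_b):
    """Estimate conversion gain (electrons per DN). Differencing paired
    frames cancels fixed-pattern noise; subtracting the dark-frame
    statistics removes the read-noise and dark-current contributions."""
    signal = (flat_a.mean() + flat_b.mean() - dark_a.mean() - dark_b.mean()) / 2
    var_flat = np.var(flat_a - flat_b) / 2.0   # shot noise + read noise
    var_dark = np.var(dark_a - dark_b) / 2.0   # read (and dark shot) noise
    return signal / (var_flat - var_dark)
```

On simulated frames with Poisson shot noise at a known gain, the estimator recovers that gain to within a few percent.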

  14. HHEBBES! All sky camera system: status update

    NASA Astrophysics Data System (ADS)

    Bettonvil, F.

    2015-01-01

    A status update is given of the HHEBBES! all-sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purpose of the system is to a) recover meteorites; b) identify origin/parental bodies. In 2015, two new cameras were rolled out: BINGO! (like HHEBBES!, also in the Netherlands) and POgLED, in Serbia. BINGO! is the first camera equipped with a longer focal length fisheye lens, to further increase the accuracy. Several minor improvements have been made and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  15. The calibration of cellphone camera-based colorimetric sensor array and its application in the determination of glucose in urine.

    PubMed

    Jia, Ming-Yan; Wu, Qiong-Shui; Li, Hui; Zhang, Yu; Guan, Ya-Feng; Feng, Liang

    2015-12-15

    In this work, a novel approach that can calibrate the colors obtained with a cellphone camera was proposed for the colorimetric sensor array. The variations of ambient light conditions, imaging positions and even cellphone brands could all be compensated via taking the black and white backgrounds of the sensor array as references, thereby yielding accurate measurements. The proposed calibration approach was successfully applied to the detection of glucose in urine by a colorimetric sensor array. Snapshots of the glucose sensor array by a cellphone camera were calibrated by the proposed compensation method and the urine samples at different glucose concentrations were well discriminated with no confusion after a hierarchical clustering analysis.
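
The black/white reference correction described above amounts to a per-channel normalization; a minimal sketch with hypothetical pixel values:

```python
def calibrate_rgb(raw, black, white):
    """Normalize an RGB reading against the sensor array's black and white
    background patches, so results are comparable across ambient lighting,
    imaging positions, and camera models."""
    return tuple((r - b) / (w - b) for r, b, w in zip(raw, black, white))
```

Each calibrated channel lands in [0, 1] regardless of the absolute exposure, which is what lets snapshots from different phones be clustered together.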

  16. Project INCA: Engineering of NIR (1-5 micron) cameras for ground and space-based astronomical applications

    NASA Astrophysics Data System (ADS)

    Zerbi, Filippo M.; Dalla Vedova, Florio; Batocchio, Alessandro; Canetti, Marco; Cerabolini, Paolo; Colombo, Giorgio; Ghigo, Mauro; Mazzoleni, Ruben; Pennestri, Giuseppe

    2003-03-01

    We present a status report on the R&D project INCA, aimed at creating, within a selected group of Italian SMEs, the expertise to build high quality infrared instrumentation for astronomy. The INCA consortium is currently building a fully functional test NIR camera (1-5 μm) exploring innovations in the fields of optics (new materials, aspheric surfaces), mechanics/cryogenics (new concepts in lens holding, light-weighted structures, cryo-pumps) and electronics (new chip controllers). The camera will be installed for testing at one of the major telescope facilities available to the Italian community and its performance in real astronomical applications evaluated.

  17. Nonlinear positron acoustic solitary waves

    SciTech Connect

    Tribeche, Mouloud; Aoutou, Kamel; Younsi, Smain; Amour, Rabia

    2009-07-15

    The problem of nonlinear positron acoustic solitary waves involving the dynamics of mobile cold positrons is addressed. A theoretical work is presented to show their existence and possible realization in a simple four-component plasma model. The results should be useful for the understanding of the localized structures that may occur in space and laboratory plasmas as new sources of cold positrons are now well developed.

  18. Positrons observed to originate from thunderstorms

    NASA Astrophysics Data System (ADS)

    Fishman, Gerald J.

    2011-05-01

    Thunderstorms are the result of warm, moist air moving rapidly upward, then cooling and condensing. Electrification occurs within thunderstorms (as noted by Benjamin Franklin), produced primarily by frictional processes among ice particles. This leads to lightning discharges; the types, intensities, and rates of these discharges vary greatly among thunderstorms. Even though scientists have been studying lightning since Franklin's time, new phenomena associated with thunderstorms are still being discovered. In particular, a recent finding by Briggs et al. [2011], based on observations by the Gamma-Ray Burst Monitor (GBM) instrument on NASA's satellite-based Fermi Gamma-ray Space Telescope (Fermi), shows that positrons are also generated by thunderstorms. Positrons are the antimatter form of electrons—they have the same mass and charge as an electron but are of positive rather than negative charge; hence the name positron. Observations of positrons from thunderstorms may lead to a new tool for understanding the electrification and high-energy processes occurring within thunderstorms. New theories, along with new observational techniques, are rapidly evolving in this field.

  19. The ATLAS Positron Experiment -- APEX

    SciTech Connect

    Ahmad, I.; Back, B.B.; Betts, R.R.; Dunford, R.; Kutschera, W.; Rhein, M.D.; Schiffer, J.P.; Wilt, P.; Wuosmaa, A.; Austin, S.M.; Kashy, E.; Winfield, J.S.; Yurkon, J.E.; Bazin, D.; Calaprice, F.P.; Young, A.; Chan, K.C.; Chisti, A.; Chowhury, P.; Greenberg, J.S.; Kaloskamis, N.; Lister, C.J.; Fox, J.D.; Roa, E.; Freedman, S.; Maier, M.R.; Freer, M.; Gazes, S.; Hallin, A.L.; Liu, M.; Happ, T.; Perera, A.; Wolfs, F.L.H.; Trainor, T.; Wolanski, M.

    1994-03-01

    APEX -- the ATLAS Positron Experiment -- is designed to measure electrons and positrons emitted in heavy-ion collisions. Its scientific goal is to gain insight into the puzzling positron-line phenomena observed at GSI Darmstadt. It is in operation at the ATLAS accelerator at Argonne National Laboratory. The assembly of the apparatus is finished, and at the beginning of 1993 the first positrons produced in heavy-ion collisions were observed. The first full-scale experiment was carried out in December 1993, and the data are currently being analyzed. In this paper, the principles of operation are explained and a status report on the experiment is given.

  20. Laser Created Relativistic Positron Jets

    SciTech Connect

    Chen, H; Wilks, S C; Meyerhofer, D D; Bonlie, J; Chen, C D; Chen, S N; Courtois, C; Elberson, L; Gregori, G; Kruer, W; Landoas, O; Mithen, J; Murphy, C; Nilson, P; Price, D; Scheider, M; Shepherd, R; Stoeckl, C; Tabak, M; Tommasini, R; Beiersdorder, P

    2009-10-08

    Electron-positron jets with MeV temperature are thought to be present in a wide variety of astrophysical phenomena such as active galaxies, quasars, gamma ray bursts and black holes. They have now been created in the laboratory in a controlled fashion by irradiating a gold target with an intense picosecond duration laser pulse. About 10{sup 11} MeV positrons are emitted from the rear surface of the target in a 15 to 22-degree cone for a duration comparable to the laser pulse. These positron jets are quasi-monoenergetic (E/{delta}E {approx} 5) with peak energies controllable from 3-19 MeV. They have temperatures from 1-4 MeV in the beam frame in both the longitudinal and transverse directions. Positron production has been studied extensively in recent decades at low energies (sub-MeV) in areas related to surface science, positron emission tomography, basic antimatter science such as antihydrogen experiments, Bose-Einstein condensed positronium, and basic plasma physics. However, the experimental tools to produce very high temperature positrons and high-flux positron jets needed to simulate astrophysical positron conditions have so far been absent. The MeV temperature jets of positrons and electrons produced in our experiments offer a first step to evaluate the physics models used to explain some of the most energetic phenomena in the universe.