Science.gov

Sample records for camera based positron

  1. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image...

  2. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  3. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  4. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  5. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  6. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron-emitting radiopharmaceuticals used for the three studies were Na¹²⁴I (4.2 d half-life) for the thyroid, ⁵⁵Co-bleomycin (17.5 h half-life) for the ENT region and ⁶⁸Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na¹²⁴I administration to the thyroid patients, (II) 18 h after intravenous administration of ⁵⁵Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of ⁶⁸Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where the PET technique makes a significant clinical contribution to diagnosis and therapy planning. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an in vivo estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver and the three-dimensional display of liver PET data using shaded-graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the respective role of PET as a new imaging modality in these applications is not yet clearly established. The clinical impact of PET in liver and ENT malignant tumour staging therefore needs further investigation.
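The half-lives and imaging delays quoted in this abstract determine how much tracer activity remains at scan time; the short sketch below (with all times converted to hours) works out that standard decay arithmetic. The function name is illustrative.

```python
import math

def fraction_remaining(t, half_life):
    """Fraction of radiotracer activity left after time t (same units as half_life)."""
    return 2.0 ** (-t / half_life)

# Half-lives and imaging delays quoted in the abstract, converted to hours.
studies = {
    "Na-124I (thyroid)":    (24.0, 4.2 * 24.0),          # imaged 24 h post administration
    "55Co-bleomycin (ENT)": (18.0, 17.5),                # imaged 18 h post administration
    "68Ga-colloid (liver)": (20.0 / 60.0, 68.0 / 60.0),  # imaged 20 min post injection
}

for name, (t, t_half) in studies.items():
    print(f"{name}: {fraction_remaining(t, t_half):.2f} of activity remains")
```

The thyroid protocol keeps roughly 85% of the activity at scan time, while the ENT protocol images after about one full half-life.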

  7. A simple data loss model for positron camera systems

    SciTech Connect

Eriksson, L. (Dept. of Clinical Neurophysiology); Wienhard, K.; Dahlbom, M. (School of Medicine)

    1994-08-01

A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead-time corrections in existing scanners, although this is possible. Instead, the model is intended for data simulations to determine the figures of merit of future camera systems based on state-of-the-art data-handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems, the Siemens-CTI ECAT EXACT and ECAT EXACT HR, have been investigated in terms of this model. Both have an axial field-of-view (FOV) of about 15 cm, have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live-time calculation in a futuristic large axial FOV cylindrical system.
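The factorized loss model described above can be sketched numerically. The component forms below (a paralyzable detector term and a non-paralyzable data-handling term) and the dead-time constants are assumptions for illustration; the abstract does not specify them.

```python
import math

def detector_live_fraction(rate, tau_det):
    # Paralyzable dead-time model for the (block-)detector component (assumed form).
    return math.exp(-rate * tau_det)

def data_handling_live_fraction(rate, tau_dh):
    # Non-paralyzable model for coincidence processing, transfer and storage (assumed form).
    return 1.0 / (1.0 + rate * tau_dh)

def observed_rate(true_rate, tau_det=1e-6, tau_dh=0.5e-6):
    """Observed rate = true rate x factorized live fraction (illustrative constants)."""
    live = detector_live_fraction(true_rate, tau_det) * data_handling_live_fraction(true_rate, tau_dh)
    return true_rate * live

print(observed_rate(1e3))  # ~1000 cps: losses negligible at low rates
print(observed_rate(5e5))  # well below 5e5 cps: losses dominate at 0.5 Mcps
```

The factorization makes it easy to swap in a different data-handling architecture and re-evaluate a hypothetical scanner's live time, which is the use the abstract emphasizes.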

  8. A preliminary evaluation of a positron camera system using weighted decoding of individual crystals

    SciTech Connect

    Holtel, S.; Eriksson, L.; Larsson, J.E.; Ericson, T.; Stjernberg, H.; Hansen, P.; Bohm, C.; Kesselberg, M.; Rota, E.; Herzog, H.

    1988-02-01

    The whole-body positron camera PC4096-15WB is now in operation. It is based on a detection unit with sixteen scintillating crystals mounted on two dual photomultiplier tubes. The design ideas are specified and a system description is given. Preliminary test results including spatial resolution, sensitivity to true and random coincidences, scatter correction, and count rate linearity are presented.

  9. Positron camera using position-sensitive detectors: PENN-PET

    SciTech Connect

    Muehllehner, G.; Karp, J.S.

    1986-01-01

    A single-slice positron camera has been developed with good spatial resolution and high count rate capability. The camera uses a hexagonal arrangement of six position-sensitive NaI(Tl) detectors. The count rate capability of NaI(Tl) was extended to 800k cps through the use of pulse shortening. In order to keep the detectors stationary, an iterative reconstruction algorithm was modified which ignores the missing data in the gaps between the six detectors and gives artifact-free images. The spatial resolution, as determined from the image of point sources in air, is 6.5 mm full width at half maximum. We have also imaged a brain phantom and dog hearts.

  10. Development of the LBNL positron emission mammography camera

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Wang, Jimmy; Maltz, Jonathon S.; Qi, Jinyi; Mandelli, Emanuele; Moses, William W.

    2002-12-19

We present the construction status of the LBNL Positron Emission Mammography (PEM) camera, which utilizes a PET detector module with depth-of-interaction measurement consisting of 64 LSO crystals (3×3×30 mm³) coupled on one end to a single photomultiplier tube (PMT) and on the opposite end to a 64-pixel array of silicon photodiodes (PDs). The PMT provides an accurate timing pulse, the PDs identify the crystal of interaction, the sum provides a total-energy signal, and the PD/(PD+PMT) ratio determines the depth of interaction. We have completed construction of all 42 PEM detector modules. All data acquisition electronics have been completed, fully tested and loaded onto the gantry. We have demonstrated that all functions of the custom IC work using the production rigid-flex boards and data acquisition system. Preliminary detector-module characterization and coincidence data have been taken using the production system, including initial images.
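The dual-ended readout logic summarized above (total energy from the sum, depth from the PD/(PD+PMT) ratio) can be sketched in a few lines. The linear ratio-to-depth mapping and the signal values are illustrative assumptions; a real system would use a measured calibration curve.

```python
def decode_event(pd_signal, pmt_signal, crystal_length_mm=30.0):
    """Decode one event from a dual-ended (PD + PMT) crystal readout.

    Assumes, for illustration only, that the PD/(PD+PMT) ratio maps
    linearly onto depth along the 30 mm crystal.
    """
    total = pd_signal + pmt_signal        # total-energy signal
    ratio = pd_signal / total             # depth-of-interaction ratio
    depth_mm = ratio * crystal_length_mm  # assumed linear depth mapping
    return total, depth_mm

energy, depth = decode_event(pd_signal=75.0, pmt_signal=25.0)
print(energy, depth)  # 100.0 22.5
```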

  11. Evaluation of the Karolinska new positron camera system; The Scanitronix PC2048-15B

    SciTech Connect

Litton, J.E.; Holte, S.; Eriksson, L.

    1990-04-01

The PC2048-15B is the brain version of the new generation of Scanditronix positron camera systems, now being installed at the Karolinska Hospital in Stockholm, Sweden. The system has eight rings with 256 6×12×30 mm³ BGO crystals per ring. Data corresponding to fifteen image slices are recorded simultaneously. The detector rings are based on block-detector units, each with sixteen scintillating crystals mounted on two dual photomultiplier tubes. A system description is given, and test results including spatial resolution, sensitivity to true and random coincidences, scatter correction, count-rate linearity, spatial independence and reproducibility are presented. Clinical results are discussed, including rCBF studies and receptor studies, to cover the two most extreme situations met in brain PET studies, i.e., high count-rate and high-resolution requirements.

  12. The electronics system for the LBNL positron emission mammography (PEM) camera

    SciTech Connect

    Moses, W.W.; Young, J.W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M.H.; Weng, M.

    2000-11-04

We describe the electronics for a high performance Positron Emission Mammography (PEM) camera. It is based on the electronics for a human brain PET camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An ASIC services the PD array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field-programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and to measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication, real-time control, and system configuration. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. Incorporation of extensive onboard diagnostics, implemented in the FPGAs, is required by the very high level of integration and density achieved by this system.

  13. Development of a high resolution beta camera for a direct measurement of positron distribution on brain surface

    SciTech Connect

    Yamamoto, S.; Seki, C.; Kashikura, K.

    1996-12-31

We have developed and tested a high resolution beta camera for direct measurement of the positron distribution on the brain surface of animals. The beta camera consists of a thin CaF₂(Eu) scintillator, a tapered fiber-optics plate (taper fiber) and a position-sensitive photomultiplier tube (PSPMT). The taper fiber is the key component of the camera. We have developed two types of beta camera: one with a 20 mm diameter field of view for imaging the brain surface of cats, and one with a 10 mm diameter field of view for that of rats. The spatial resolutions of the beta cameras for cats and rats were 0.8 mm FWHM and 0.5 mm FWHM, respectively. We confirmed that the developed beta cameras may overcome the spatial-resolution limitation of positron emission tomography (PET).

  14. Figures of merit for different detector configurations utilized in high resolution positron cameras

    SciTech Connect

    Eriksson, L.; Bergstrom, M.; Rohm, C.; Holte, S.; Kesselberg, M.

    1986-02-01

A new positron camera system is currently being designed. The goal is an instrument that can measure the whole brain with a spatial resolution of 5 mm FWHM in all directions. In addition to the high spatial resolution, the system must be able to handle count rates of 0.5 MHz or more in order to perform accurate fast dynamic function studies, such as the determination of cerebral blood flow and cerebral oxygen consumption following a rapid bolus. An overall spatial resolution of 5 mm requires crystal dimensions of 6 × 6 × L mm³ or less, L being the length of the crystal. Timing and energy requirements necessitate high performance photomultipliers. The identification of the small scintillator crystals can currently only be handled by schemes based on the Anger technique, and in the future possibly with photodiodes. In the present work, different crystal identification schemes have been investigated. The investigations have involved efficiency measurements of different scintillators, line-spread-function studies and the evaluation of different coding schemes for identifying small crystals.
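As a rough cross-check of the crystal-size requirement, a commonly quoted rule of thumb adds the main PET resolution contributions in quadrature. The function below is such a sketch, not this design study's own model, and all parameter values are illustrative.

```python
import math

def system_fwhm_mm(crystal_width_mm, ring_diameter_mm,
                   positron_range_mm=0.5, decoding_mm=0.0):
    """Approximate detector-pair resolution (FWHM, mm), adding in quadrature:
    crystal width / 2, positron range, 511 keV photon noncollinearity
    (~0.0022 x ring diameter), and any crystal-decoding error.
    A rule-of-thumb sketch with illustrative defaults."""
    geometric = crystal_width_mm / 2.0
    noncollinearity = 0.0022 * ring_diameter_mm
    return math.sqrt(geometric**2 + positron_range_mm**2
                     + noncollinearity**2 + decoding_mm**2)

# A 6 mm crystal on an assumed 60 cm ring:
print(round(system_fwhm_mm(6.0, 600.0), 1))  # ~3.3 mm before reconstruction blurring
```

Reconstruction filtering and decoding errors typically degrade this intrinsic figure further, which is consistent with the abstract's pairing of 6 mm crystals with a 5 mm system goal.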

  15. A CF4 based positron trap

    NASA Astrophysics Data System (ADS)

    Marjanovic, Srdjan; Bankovic, Ana; Dujko, Sasa; Deller, Adam; Cooper, Ben; Cassidy, David; Petrovic, Zoran

    2016-05-01

All positron buffer-gas traps in use rely on N2 as the primary trapping gas due to its conveniently placed a¹Π electronic excitation cross section, which is large enough to compete with positronium (Ps) formation in the threshold region. Its energy loss of 8.5 eV is sufficient to capture positrons into a potential well upon a single collision. The competing Ps formation, however, limits the efficiency of the two-stage trap to 25%. As positron moderators produce beams with energies of several eV, we have proposed to use CF4 in the first stage of the trap, due to its large vibrational excitation cross section; several vibrational excitations would be sufficient to trap the positrons with small losses. Apart from the simulations, we also report the results of attempts to apply this approach to an existing Surko-type positron trap. Operating the unmodified trap as a CF4-based device proved unsuccessful, primarily due to excessive scattering caused by the high CF4 pressure in the first stage. However, the performance was consistent with subsequent simulations using the real system parameters. This agreement indicates that an efficient CF4-based scheme may be realized in an appropriately designed trap.
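The trapping argument above (one 8.5 eV N2 electronic excitation versus several small vibrational losses in CF4) can be made concrete with a back-of-the-envelope count. The CF4 ν3 quantum used below (~0.159 eV) is an assumed illustrative value.

```python
import math

# Energy lost per N2 electronic excitation (from the abstract) vs. an
# assumed CF4 nu3 vibrational quantum (~0.159 eV, illustrative).
N2_LOSS_EV = 8.5
CF4_QUANTUM_EV = 0.159

def collisions_to_shed(beam_energy_ev, loss_per_collision_ev):
    """Minimum number of inelastic collisions needed to shed the beam energy."""
    return math.ceil(beam_energy_ev / loss_per_collision_ev)

# For a few-eV moderated beam, one N2 collision suffices, while CF4 needs
# on the order of twenty vibrational collisions per trapped positron.
print(collisions_to_shed(3.0, N2_LOSS_EV))      # 1
print(collisions_to_shed(3.0, CF4_QUANTUM_EV))  # 19
```

The point of the CF4 scheme is that each small vibrational loss sits below the Ps-formation threshold, so the many-collision route can avoid the 25% efficiency ceiling of N2 trapping.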

  16. Design of POSICAM: A high resolution multislice whole body positron camera

    SciTech Connect

    Mullani, N.A.; Wong, W.H.; Hartz, R.K.; Bristow, D.; Gaeta, J.M.; Yerian, K.; Adler, S.; Gould, K.L.

    1985-01-01

A high resolution (6 mm), multislice (21) whole-body positron camera has been designed with an innovative detector and septa arrangement for 3-D imaging and tracer quantitation. An object of interest such as the brain or the heart is optimally imaged by the 21 simultaneous image planes, which have 12 mm resolution and are separated by 5.5 mm to provide adequate sampling in the axial direction. The detector geometry and the electronics are flexible enough to allow BaF₂, BGO, GSO or time-of-flight BaF₂ scintillators. The mechanical gantry has been designed for clinical applications and incorporates several features for patient handling and comfort. A large patient opening of 58 cm diameter with a tilt of ±30° and rotation of ±20° permits imaging from different positions without moving the patient. Multiprocessor computing systems and user-friendly software make the POSICAM a powerful 3-D imaging device. 7 figs.

  17. Color measurements based on a color camera

    NASA Astrophysics Data System (ADS)

    Marszalec, Elzbieta A.; Pietikaeinen, Matti

    1997-08-01

The domain of color camera applications is increasing all the time due to recent progress in color machine vision research. Colorimetric measurement tasks are quite complex, as the purpose of color measurement is to provide a quantitative evaluation of the phenomenon of color as perceived by human vision. A proper colorimetric calibration of the color camera system is needed in order to make color a practical tool in machine vision. This paper discusses two approaches to color measurement based on a color camera and includes an overview of practical approaches to color camera calibration under unstable illumination conditions.

  18. Scatter correction in scintillation camera imaging of positron-emitting radionuclides

    SciTech Connect

    Ljungberg, M.; Danfelter, M.; Strand, S.E.

    1996-12-31

The use of Anger scintillation cameras for positron SPECT has recently become of interest due to their use in imaging with 2-[¹⁸F]deoxyglucose. Due to the special crystal design (thin and wide), a significant number of primary events will also be recorded in the Compton region of the energy spectrum. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which needs to be corrected. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data with a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
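The window arithmetic described above can be sketched as follows. The multiplier values here are hypothetical; in practice each k is calibrated, and (per the abstract) the CW multiplier varies more with depth than the PW one.

```python
def scatter_corrected_counts(c_pw, c_cw, c_sw, k_pw, k_cw):
    """Scatter-window correction sketch: counts in the scatter window (SW),
    scaled by window-specific multipliers, approximate the scatter present
    in the photopeak (PW) and Compton (CW) windows; subtracting them leaves
    estimated primary events, which are then combined."""
    primaries_pw = c_pw - k_pw * c_sw  # primaries in the photopeak window
    primaries_cw = c_cw - k_cw * c_sw  # primaries in the Compton window
    return primaries_pw + primaries_cw

# Hypothetical projection-bin counts and multipliers:
print(scatter_corrected_counts(c_pw=10000, c_cw=6000, c_sw=4000,
                               k_pw=0.5, k_cw=0.75))  # 11000.0
```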

  19. The underwater camera calibration based on virtual camera lens distortion

    NASA Astrophysics Data System (ADS)

    Qin, Dahui; Mao, Ting; Cheng, Peng; Zhang, Zhiliang

    2011-08-01

Machine vision is becoming more and more popular underwater. Calibrating a camera underwater is a challenge because of the complicated light-ray path through the water and air media. In this paper we first analyze the characteristics of the camera when light passes from air into water. We then propose a new method that uses a high-order camera distortion model to compensate for the deviation caused by light refraction as rays pass through the water and air media. Finally, experimental results show that the high-order distortion model can simulate the effect of underwater light refraction on the camera image in the process of underwater camera calibration.
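The compensation idea, as summarized above, amounts to fitting a polynomial radial distortion model whose higher-order terms have enough freedom to also absorb water-air refraction. A minimal sketch with hypothetical coefficients:

```python
def apply_radial_distortion(x, y, k1, k2=0.0, k3=0.0):
    """Map undistorted normalized image coordinates to distorted ones using
    the standard polynomial radial model:
        x_d = x (1 + k1 r^2 + k2 r^4 + k3 r^6), likewise for y.
    In the underwater-calibration idea, k2 and k3 absorb refraction effects
    in addition to lens distortion; coefficients here are hypothetical."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2 + k3 * r2 * r2 * r2
    return x * factor, y * factor

xd, yd = apply_radial_distortion(0.5, 0.0, k1=-0.1, k2=0.02)
print(xd, yd)
```

Calibration then proceeds exactly as for an in-air camera (e.g. with a checkerboard target), except that the fitted coefficients describe the combined lens-plus-refraction "virtual camera" rather than the lens alone.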

  20. Dynamic positron computed tomography of the heart with a high sensitivity positron camera and nitrogen-13 ammonia

    SciTech Connect

    Tamaki, N.; Senda, M.; Yonekura, Y.; Saji, H.; Kodama, S.; Konishi, Y.; Ban, T.; Kambara, H.; Kawai, C.; Torizuka, K.

    1985-06-01

Dynamic positron computed tomography (PCT) of the heart was performed with a high-sensitivity, whole-body multislice PCT device and [¹³N]ammonia. A serial 15-sec dynamic study immediately after i.v. [¹³N]ammonia injection showed the blood pool of the ventricular cavities in the first scan and myocardial images from the third scan in normal cases. In patients with myocardial infarction and mitral valve disease, tracer washout from the lung and the myocardial peak time tended to be longer, suggesting the presence of pulmonary congestion. PCT delineated tracer retention in the dorsal part of the lung. A serial 5-min late dynamic study in nine cases showed a gradual increase in myocardial activity over 30 min in all normal segments and 42% of infarct segments, while less than 13% activity increase was observed in 50% of infarct segments. Thus, serial dynamic PCT with [¹³N]ammonia, assessing tracer kinetics in the heart and lung, is a valuable adjunct to static myocardial perfusion imaging for the evaluation of various cardiac disorders.

  1. Multimodal sensing-based camera applications

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  2. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000™ digital video processor chip and Adaptive Sensitivity™ patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host medium via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for a stored image by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  3. Van de Graaff based positron source production

    NASA Astrophysics Data System (ADS)

    Lund, Kasey Roy

The anti-matter counterpart of the electron, the positron, can be used for a myriad of scientific research projects, including materials research, energy storage, and deep-space flight propulsion. Currently there is a demand for large numbers of positrons to aid in these research projects. There are different methods of producing and harvesting positrons, but all require radioactive sources or large facilities. Positron beams produced by relatively small accelerators are attractive because they are easily shut down, and small accelerators are readily available. A 4 MV Van de Graaff accelerator was used to induce the nuclear reaction ¹²C(d,n)¹³N in order to produce an intense beam of positrons. ¹³N is an isotope of nitrogen that decays with a 10-minute half-life into ¹³C, a positron, and an electron neutrino. This radioactive gas is frozen onto a cryogenic freezer, from which it is channeled to form an antimatter beam. The beam is then guided by axial magnetic fields into a superconducting magnet with a field strength of up to 7 T, where it will be stored in a newly designed Micro-Penning-Malmberg trap. Several source geometries were tested, and a maximum positron flux of greater than 0.55×10⁶ e⁺ s⁻¹ was achieved. This beam was produced using a solid rare-gas moderator composed of krypton. Due to geometric restrictions on this setup, only 0.1-1.0% of the antimatter was frozen at the desired locations. Simulations and preliminary experiments suggest that a new geometry, currently under testing, will produce a beam of 10⁷ e⁺ s⁻¹ or more.

  4. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with ¹¹C, ¹³N, ¹⁵O, or ¹⁸F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [¹⁸F]fluorodeoxyglucose (FDG) in oncology, where it is most useful to localize primary tumours and their metastases.

  5. A slanting light-guide analog decoding high resolution detector for positron emission tomography camera

    SciTech Connect

    Wong, W.H.; Jing, M.; Bendriem, B.; Hartz, R.; Mullani, N.; Gould, K.L.; Michel, C.

    1987-02-01

    Current high resolution PET cameras require the scintillation crystals to be much narrower than the smallest available photomultipliers. In addition, the large number of photomultiplier channels constitutes the major component cost in the camera. Recent new designs use the Anger camera type of analog decoding method to obtain higher resolution and lower cost by using the relatively large photomultipliers. An alternative approach to improve the resolution and cost factors has been proposed, with a system of slanting light-guides between the scintillators and the photomultipliers. In the Anger camera schemes, the scintillation light is distributed to several neighboring photomultipliers which then determine the scintillation location. In the slanting light-guide design, the scintillation is metered and channeled to only two photomultipliers for the decision making. This paper presents the feasibility and performance achievable with the slanting light-guide detectors. With a crystal/photomultiplier ratio of 6/1, the intrinsic resolution was found to be 4.0 mm using the first non-optimized prototype light-guides on BGO crystals. The axial resolution will be about 5-6 mm.

  6. Formation mechanisms and optimization of trap-based positron beams

    NASA Astrophysics Data System (ADS)

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-02-01

    Described here are simulations of pulsed, magnetically guided positron beams formed by ejection from Penning-Malmberg-style traps. In a previous paper [M. R. Natisin et al., Phys. Plasmas 22, 033501 (2015)], simulations were developed and used to describe the operation of an existing trap-based beam system and provided good agreement with experimental measurements. These techniques are used here to study the processes underlying beam formation in more detail and under more general conditions, therefore further optimizing system design. The focus is on low-energy beams (˜eV) with the lowest possible spread in energies (<10 meV), while maintaining microsecond pulse durations. The simulations begin with positrons trapped within a potential well and subsequently ejected by raising the bottom of the trapping well, forcing the particles over an end-gate potential barrier. Under typical conditions, the beam formation process is intrinsically dynamical, with the positron dynamics near the well lip, just before ejection, particularly crucial to setting beam quality. In addition to an investigation of the effects of beam formation on beam quality under typical conditions, two other regimes are discussed; one occurring at low positron temperatures in which significantly lower energy and temporal spreads may be obtained, and a second in cases where the positrons are ejected on time scales significantly faster than the axial bounce time, which results in the ejection process being essentially non-dynamical.

  7. Recent progress in tailoring trap-based positron beams

    SciTech Connect

    Natisin, M. R.; Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2013-03-19

    Recent progress is described to implement two approaches to specially tailor trap-based positron beams. Experiments and simulations are presented to understand the limits on the energy spread and pulse duration of positron beams extracted from a Penning-Malmberg (PM) trap after the particles have been buffer-gas cooled (or heated) in the range of temperatures 1000 {>=} T {>=} 300 K. These simulations are also used to predict beam performance for cryogenically cooled positrons. Experiments and simulations are also presented to understand the properties of beams formed when plasmas are tailored in a PM trap in a 5 tesla magnetic field, then non-adiabatically extracted from the field using a specially designed high-permeability grid to create a new class of electrostatically guided beams.

  8. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-08-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other together with a short-lived one are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 200 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.
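The triangulation underlying the altitude estimate can be illustrated in a simplified two-dimensional geometry, with both cameras and the contrail point assumed to lie in one vertical plane; the numbers below are illustrative, not the study's data.

```python
import math

def contrail_altitude(baseline_m, elev1_deg, elev2_deg):
    """Altitude of a point seen at elevation angles elev1/elev2 from two
    ground cameras separated by baseline_m, assuming the point lies in the
    vertical plane containing both cameras and between them:
        h / tan(e1) + h / tan(e2) = baseline  =>  h = baseline / (cot e1 + cot e2).
    A simplified 2-D sketch of the photogrammetric triangulation."""
    t1 = math.tan(math.radians(elev1_deg))
    t2 = math.tan(math.radians(elev2_deg))
    return baseline_m / (1.0 / t1 + 1.0 / t2)

# Two cameras 20 km apart, both seeing the contrail at 45 deg elevation,
# place it at 10 km altitude.
print(contrail_altitude(20000.0, 45.0, 45.0))
```

The full method generalizes this to arbitrary camera placements and viewing directions via the calibrated camera models, which is how the quoted ~200 m vertical accuracy is reached.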

  9. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-12-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other, together with a short-lived one, are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 230 m and horizontal speed to 0.2 m s⁻¹. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.
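    The altitude retrieval described above amounts to intersecting viewing rays from calibrated cameras at known ground positions. A minimal sketch of two-ray triangulation by least squares (the helper name and setup are illustrative assumptions, not the authors' code):

    ```python
    import numpy as np

    def triangulate_rays(p1, d1, p2, d2):
        """Point closest to two rays (camera position p, viewing direction d)."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        # Minimize |(p1 + t1*d1) - (p2 + t2*d2)|^2 over the ray parameters t1, t2
        A = np.array([[d1 @ d1, -d1 @ d2],
                      [d1 @ d2, -d2 @ d2]])
        b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
        t1, t2 = np.linalg.solve(A, b)
        # Midpoint of the shortest segment between the two rays
        return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))
    ```

    With observation angles converted to direction vectors by the calibrated camera model, the z-component of the returned point gives the contrail altitude.
    
    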

  10. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution that requires successively showing a 3-D feature-point-rich target to all cameras; then, using the known geometry of the 3-D target, cameras estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating the different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable for battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and to pass transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
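    The pose-recovery step described above (estimate a projection matrix from 2-D/3-D correspondences, then decompose it into intrinsics, rotation, and camera centre) can be sketched with a standard RQ-based decomposition; names are illustrative, not the authors' implementation:

    ```python
    import numpy as np

    def decompose_projection(P):
        """Recover intrinsics K, rotation R and camera centre C from P = K [R | -R C]."""
        M, p4 = P[:, :3], P[:, 3]
        # RQ decomposition of M via QR of the row-reversed, transposed matrix
        J = np.flipud(np.eye(3))                 # reversal permutation
        Q1, R1 = np.linalg.qr((J @ M).T)
        K, R = J @ R1.T @ J, J @ Q1.T            # K upper triangular, R orthogonal
        # Resolve sign ambiguity: force a positive diagonal on K
        D = np.diag(np.sign(np.diag(K)))
        K, R = K @ D, D @ R
        C = -np.linalg.solve(M, p4)              # camera centre in target coordinates
        return K / K[2, 2], R, C
    ```

    The centre C and rotation R are exactly the per-camera position and orientation relative to the 3-D target's coordinate frame.
    
    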

  11. Conceptual design of an intense positron source based on an LIA

    NASA Astrophysics Data System (ADS)

    Long, Ji-Dong; Yang, Zhen; Dong, Pan; Shi, Jin-Shui

    2012-04-01

    Accelerator-based positron sources are widely used because of their high intensity; most rely on RF accelerators. An LIA (linear induction accelerator) is a kind of high-current pulsed accelerator used for radiography. A conceptual design of an intense pulsed positron source based on an LIA is presented in this paper. One advantage of an LIA is that its pulsed power is higher than that of conventional accelerators, which means a larger number of primary electrons for positron generation per pulse. Another advantage is that an LIA is well suited to decelerating the positron bunch generated by the bremsstrahlung pair process, owing to its ability to adjustably shape the voltage pulse. By implementing LIA cavities to decelerate the positron bunch before it is moderated, the positron yield could be greatly increased. These features may make the LIA-based positron source a high-intensity pulsed positron source.

  12. Infrared camera based on a curved retina.

    PubMed

    Dumas, Delphine; Fendler, Manuel; Berger, Frédéric; Cloix, Baptiste; Pornin, Cyrille; Baier, Nicolas; Druart, Guillaume; Primot, Jérôme; le Coarer, Etienne

    2012-02-15

    The design of miniature, lightweight cameras requires an optical design breakthrough to achieve good optical performance. Solutions inspired by animals' eyes are the most promising. The curvature of the retina offers several advantages, such as uniform intensity and no field curvature, but this feature is rarely exploited. The work presented here is a solution to spherically bend monolithic IR detectors. Compared to state-of-the-art methods, a higher fill factor is obtained and the device fabrication process is not modified. We made an IR eye camera with a single lens and a curved IR bolometer. Captured images are well resolved and have good contrast, and the modulation transfer function shows better quality than that of comparable planar systems. PMID:22344137

  13. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, that limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) was introduced to overcome the limitation of using a traditional optical simulation approach to study light field camera technology. More specifically, a traditional optical simulation approach can only present light energy distributions and typically lacks the capability to present pictures of realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provided a way to establish a link between the virtual scene and the real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with adopting a micro-lens array in front of the image sensors. The operational constraints, performance metrics, computation resources needed, etc. associated with this newly developed light field camera technique are presented in detail.

  14. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engaging a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras that are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and nonholonomic kinematics are fundamentally path dependent. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  15. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    NASA Astrophysics Data System (ADS)

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-05-01

    The image-information acquisition ability of a conventional camera is usually far below the Shannon limit, since it does not exploit the correlation between pixels of the image data. By applying a random phase modulator to encode the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the cells of the three-dimensional (3D) spectral image data-cube can be resolved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens a way of approaching the Shannon limit determined by information theory in optical imaging instruments.

  16. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image-information acquisition ability of a conventional camera is usually far below the Shannon limit, since it does not exploit the correlation between pixels of the image data. By applying a random phase modulator to encode the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the cells of the three-dimensional (3D) spectral image data-cube can be resolved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens a way of approaching the Shannon limit determined by information theory in optical imaging instruments. PMID:27180619

  17. Spectral Camera based on Ghost Imaging via Sparsity Constraints.

    PubMed

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image-information acquisition ability of a conventional camera is usually far below the Shannon limit, since it does not exploit the correlation between pixels of the image data. By applying a random phase modulator to encode the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the cells of the three-dimensional (3D) spectral image data-cube can be resolved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens a way of approaching the Shannon limit determined by information theory in optical imaging instruments. PMID:27180619
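    The sparsity-constrained recovery underlying GISC-type cameras is typically posed as y = Ax with a random measurement matrix A and sparse x. A toy sketch using ISTA (iterative soft thresholding) followed by a least-squares debiasing step; this illustrates CS recovery generically, not the authors' exact reconstruction algorithm:

    ```python
    import numpy as np

    def ista_reconstruct(A, y, lam=0.05, n_iter=500):
        """Basic ISTA for min_x 0.5*||y - A x||^2 + lam*||x||_1 (sparse recovery)."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth part
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x + A.T @ (y - A @ x) / L                           # gradient step
            x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
        return x

    # Toy demo: recover a k-sparse signal from m < n random measurements
    rng = np.random.default_rng(0)
    n, m, k = 80, 60, 3
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    y = A @ x_true                                 # compressed measurements
    x_hat = ista_reconstruct(A, y)
    S = np.abs(x_hat) > 0.1                        # detected support
    x_rec = np.zeros(n)
    x_rec[S] = np.linalg.lstsq(A[:, S], y, rcond=None)[0]  # debias on the support
    ```

    With noiseless data and a correctly detected support, the debiased solution recovers the sparse signal exactly even though the number of measurements is below the Nyquist count.
    
    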

  18. Positron annihilation in cardo-based polymer membranes.

    PubMed

    Kobayashi, Y; Kazama, Shingo; Inoue, K; Toyama, T; Nagai, Y; Haraya, K; Mohamed, Hamdy F M; O'Rouke, B E; Oshima, N; Kinomura, A; Suzuki, R

    2014-06-01

    Positron annihilation lifetime spectroscopy (PALS) is applied to a series of bis(aniline)fluorene- and bis(xylidine)fluorene-based cardo polyimide and bis(phenol)fluorene-based polysulfone membranes. It was found that favorable amounts of positronium (Ps, the positron-electron bound state) form in cardo polyimides with the 2,2-bis(3,4-dicarboxyphenyl) hexafluoropropane dianhydride (6FDA) moiety and in bis(phenol)fluorene-based cardo polysulfone, but no Ps forms in most of the polyimides with pyromellitic dianhydride (PMDA) and 3,3',4,4'-biphenyltetracarboxylic dianhydride (BTDA) moieties. A bis(xylidine)fluorene-based polyimide membrane containing PMDA and BTDA moieties exhibits slight Ps formation, but the ortho-positronium (o-Ps, the triplet state of Ps) lifetime of this membrane shortens anomalously with increasing temperature, which we attribute to a chemical reaction of o-Ps. The correlation between the hole size (V_h) deduced from the o-Ps lifetime and the diffusion coefficients of O2 and N2, for polyimides with the 6FDA moiety and for cardo polysulfone showing favorable Ps formation, is discussed on the basis of the free-volume theory of gas diffusion. It is suggested that o-Ps has a strong tendency to probe larger holes in rigid-chain polymers with wide hole-size distributions, such as those containing cardo moieties, resulting in deviations from the previously reported correlations for common polymers such as polystyrene, polycarbonate, polysulfone, and so forth. PMID:24815092
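    In PALS analysis, the hole size is conventionally deduced from the o-Ps lifetime through the Tao-Eldrup model, which relates the lifetime to a spherical free-volume radius R with an empirical electron-layer thickness ΔR = 0.166 nm. A sketch of this standard relation and its numerical inversion (standard textbook parameters, not necessarily those used by the authors):

    ```python
    import math

    DELTA_R = 0.166  # nm, empirical electron-layer thickness of the Tao-Eldrup model

    def o_ps_lifetime(R):
        """Tao-Eldrup o-Ps lifetime (ns) in a spherical hole of radius R (nm)."""
        x = R / (R + DELTA_R)
        # Annihilation rate 2/ns times the overlap factor; lifetime is its inverse
        return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

    def hole_radius(tau, lo=1e-3, hi=2.0):
        """Invert the lifetime relation by bisection (tau in ns, radius in nm)."""
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            if o_ps_lifetime(mid) < tau:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def hole_volume(tau):
        """Mean free-volume hole size V_h (nm^3) from the o-Ps lifetime (ns)."""
        R = hole_radius(tau)
        return 4.0 / 3.0 * math.pi * R ** 3
    ```

    The lifetime increases monotonically with hole radius, so the bisection inversion is well defined; V_h is the quantity correlated with the gas diffusion coefficients in the abstract.
    
    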

  19. Camera array based light field microscopy

    PubMed Central

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-01-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  20. Camera array based light field microscopy.

    PubMed

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-09-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-aperture images formed by the corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  1. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We make a gradient correction to a new local-density-approximation form of the positron-electron correlation. The positron lifetimes and affinities are then probed by using these two approximation forms with three electronic-structure calculation methods: the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between lifetimes calculated with the FLAPW and ATSUP methods are clearly interpreted in terms of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement with the FLAPW method on both positron lifetimes and affinities, and that the competitiveness of the ATSUP method against the FLAPW/PAW methods is reduced in the best calculations. By comparison with experimental data, the newly introduced gradient-corrected correlation form is shown to be competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  2. A new scheme to accumulate positrons in a Penning-Malmberg trap with a linac-based positron pulsed source

    SciTech Connect

    Dupre, P.

    2013-03-19

    The Gravitational Behaviour of Antimatter at Rest experiment (GBAR) is designed to perform a direct measurement of the weak equivalence principle on antimatter by measuring the acceleration of anti-hydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high-density positronium (Ps) cloud as a target for antiprotons, provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. The Ps target will be produced by a pulse of a few 10¹⁰ positrons injected onto a positron-positronium converter. For this purpose, a slow-positron source using an electron linac has been constructed at Saclay. The present flux is comparable with that of ²²Na-based sources using a solid neon moderator. A new positron accumulation scheme with a Penning-Malmberg trap has been proposed, taking advantage of the pulsed time structure of the beam. In the trap, the positrons are cooled by interaction with a dense electron plasma. The overall trapping efficiency has been estimated by numerical simulations to be ≈70%.

  3. Triangulation-Based Camera Calibration For Machine Vision Systems

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic A.; Celenk, Mehmet

    1990-04-01

    This paper describes a camera calibration procedure for stereo-based machine vision systems. The method is based on geometric triangulation using only a single image of three distinctive points. Both the intrinsic and extrinsic parameters of the system are determined. The procedure is performed only once, at the initial set-up, using a simple camera model. The effective focal length is extended in such a way that a linear transformation exists between the camera image plane and the output digital image. Only three world points are needed to find the extended focal length and the transformation-matrix elements that relate the camera position and orientation to a real-world coordinate system. The parameters of the system are computed by solving a set of linear equations. Experimental results show that the method, when used in a stereo system developed in this research, produces reasonably accurate 3-D measurements.

  4. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and flexible, especially for onsite multiple-camera setups without a common field of view. PMID:26761007

  5. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as on other occasions such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed, in which the geometric characteristics of the vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. By using the orthogonality of the vectors connecting the optical center to the two vanishing points, constraint equations on the camera's intrinsic parameters are established. With this method, four internal parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines, needs no strict experimental equipment, and has a convenient calibration process and a simple algorithm. By comparison with the experimental results of the calibration-plate method and of self-calibration using the machine vision software Halcon, the practicability and effectiveness of the proposed method are verified.
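    Each pair of orthogonal vanishing points yields one constraint on the intrinsics; in the simplest case (zero skew, square pixels, known principal point) that single constraint determines the focal length in closed form. A hedged sketch of that special case, not the paper's full four-parameter method:

    ```python
    import numpy as np

    def focal_from_vanishing_points(v1, v2, principal_point):
        """Focal length (pixels) from one pair of vanishing points of orthogonal
        directions, assuming zero skew, square pixels, known principal point.

        Orthogonality of the back-projected rays K^-1 v1 and K^-1 v2 gives
        (v1 - p).(v2 - p) + f^2 = 0, hence f^2 = -(v1 - p).(v2 - p).
        """
        a = np.asarray(v1, float) - principal_point
        b = np.asarray(v2, float) - principal_point
        f2 = -(a @ b)
        if f2 <= 0:
            raise ValueError("vanishing points inconsistent with orthogonal directions")
        return np.sqrt(f2)
    ```

    The full method stacks such constraints from several views to solve for all four intrinsics, including the principal point.
    
    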

  6. Design analysis and performance evaluation of a two-dimensional camera for accelerated positron-emitter beam injection by computer simulation

    SciTech Connect

    Llacer, J.; Chatterjee, A.; Batho, E.K.; Poskanzer, J.A.

    1982-05-01

    The characteristics and design of a high-accuracy, high-sensitivity 2-dimensional camera for measuring the end-point of the trajectory of accelerated heavy-ion beams of positron-emitter isotopes are described. Computer simulation methods have been used to ensure that the design meets the demanding criteria of locating the centroid of a point source in the X-Y plane with errors smaller than 1 mm, for an activity of 100 nCi, in a counting time of 5 s or less. A computer program that can be developed into a general-purpose analysis tool for a large number of positron-emitter camera configurations is described in its essential parts. The validation of basic simulation results with simple measurements is reported, and the use of the program to generate simulated images that include important second-order effects due to detector material, geometry, septa, etc. is demonstrated. Comparison between simulated images and initial results from the completed instrument shows that the desired specifications have been met.

  7. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
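    The predictive component described above can be illustrated with a standard constant-velocity Kalman filter, whose state estimate and covariance would feed the receding-horizon controller; this is a generic sketch, not the authors' implementation:

    ```python
    import numpy as np

    class ConstantVelocityKF:
        """1-D constant-velocity Kalman filter: state is [position, velocity]."""

        def __init__(self, dt=1.0, q=1e-3, r=0.25):
            self.F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
            self.H = np.array([[1.0, 0.0]])                # we observe position only
            self.Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                                   [dt**2 / 2, dt]])       # process noise
            self.R = np.array([[r]])                       # measurement noise
            self.x = np.zeros(2)
            self.P = np.eye(2)

        def predict(self):
            """Propagate state and covariance; the controller uses both."""
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            return self.x, self.P

        def update(self, z):
            """Fuse a position measurement z."""
            y = z - self.H @ self.x                        # innovation
            S = self.H @ self.P @ self.H.T + self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
            self.x = self.x + (K @ y).ravel()
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return self.x
    ```

    Growing covariance for objects that have not been observed recently is exactly the uncertainty signal the abstract uses to encourage the controller to revisit outliers.
    
    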

  8. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
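    The sphere-based selection amounts to keeping training samples whose camera response lies within a fixed Euclidean distance of the test response, followed by Wiener estimation on the selected set. A minimal sketch under that reading (function names and the data layout are assumptions):

    ```python
    import numpy as np

    def select_training_samples(train_rgb, test_rgb, radius):
        """Indices of training samples whose camera response lies inside a sphere
        centred on the test sample's response (rows of train_rgb are responses)."""
        d = np.linalg.norm(np.asarray(train_rgb) - np.asarray(test_rgb), axis=1)
        return np.where(d <= radius)[0]

    def wiener_matrix(reflectances, responses, eps=1e-8):
        """Wiener estimation matrix W so that reflectance ~= W @ camera_response,
        built from the selected training set (eps regularizes the inverse)."""
        R, S = np.asarray(reflectances), np.asarray(responses)
        return (R.T @ S) @ np.linalg.inv(S.T @ S + eps * np.eye(S.shape[1]))
    ```

    Selecting the sphere of nearby responses before building W is what correlates the training set with the test sample and improves the reconstruction accuracy.
    
    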

  9. Optimization of drift bias in an UHV based pulsed positron beam system

    SciTech Connect

    Anto, C. Varghese; Rajaraman, R.; Rao, G. Venugopal; Abhaya, S.; Parimala, J.; Amarendra, G.

    2012-06-05

    We report here the design of an ultra-high-vacuum (UHV) compatible pulsed positron beam lifetime system, which combines the principles of a conventional slow-positron beam with an RF-based pulsing scheme. The mechanical design and construction of the UHV system housing the beam have been completed, and it has been tested to a vacuum of ≈10⁻¹⁰ mbar. The voltages applied to the drift tube as a function of positron energy have been optimized using SIMION.

  10. Design of microcontroller based system for automation of streak camera

    NASA Astrophysics Data System (ADS)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  11. Design of microcontroller based system for automation of streak camera

    SciTech Connect

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-15

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  12. Embedded design based virtual instrument program for positron beam automation

    NASA Astrophysics Data System (ADS)

    Jayapandian, J.; Gururaj, K.; Abhaya, S.; Parimala, J.; Amarendra, G.

    2008-10-01

    Automation of a positron beam experiment with a single-chip embedded design using a programmable system on chip (PSoC), which provides easy interfacing of the high-voltage DC power supply, is reported. A Virtual Instrument (VI) control program written in Visual Basic 6.0 performs the following functions: (i) adjustment of the sample high voltage by interacting with the programmed PSoC hardware, (ii) control of a personal computer (PC) based multichannel analyzer (MCA) card for energy spectroscopy, (iii) analysis of the obtained spectrum to extract the relevant line-shape parameters, (iv) plotting of the relevant parameters, and (v) saving the file in the appropriate format. The present study highlights the hardware features of the PSoC module as well as the control of the MCA and other units through programming in Visual Basic.

  13. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
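    The pairwise alignment step described above can be sketched with a standard SVD-based orthogonal Procrustes (Kabsch) solver inside a toy RANSAC loop. This is a minimal illustration under stated assumptions, not the authors' implementation; the function names, the 3-point minimal sample, and the inlier threshold are illustrative choices.

```python
import numpy as np

def procrustes_rigid(A, B):
    """Least-squares rigid transform (R, t) mapping points A onto B.

    A, B: (N, 3) arrays of corresponding 3D points (e.g. head/feet
    positions expressed in two different camera coordinate systems).
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)             # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # proper rotation (det = +1)
    t = cb - R @ ca
    return R, t

def ransac_procrustes(A, B, iters=200, thresh=0.05, seed=0):
    """Toy RANSAC: fit minimal 3-point subsets, keep the most-supported model."""
    rng = np.random.default_rng(seed)
    best = (None, None, -1)
    for _ in range(iters):
        idx = rng.choice(len(A), size=3, replace=False)
        R, t = procrustes_rigid(A[idx], B[idx])
        resid = np.linalg.norm((A @ R.T + t) - B, axis=1)
        inliers = int((resid < thresh).sum())
        if inliers > best[2]:
            best = (R, t, inliers)
    return best
```

    Given noise-free correspondences, the solver recovers the relative rotation and translation exactly; with outliers, the RANSAC wrapper keeps the hypothesis supported by the most inliers.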

  14. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080

  15. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, and different factors to configure such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  16. A Robust Camera-Based Interface for Mobile Entertainment.

    PubMed

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions, therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, and different factors to configure such as the gain or the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user's perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  17. Image-based camera motion estimation using prior probabilities

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC.
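    The abstract does not give the exact form of the penalty term; a plausible sketch, assuming a Gaussian prior over a 6-parameter motion vector fitted to the tracker data, scores each RANSAC hypothesis by a truncated reprojection-error term plus the negative log prior. All names and the weighting constant are illustrative assumptions.

```python
import numpy as np

def motion_neg_log_prior(motion, mean, cov_inv):
    """Negative log of a Gaussian prior fitted to observed endoscope motion."""
    d = motion - mean
    return 0.5 * d @ cov_inv @ d

def penalized_score(reproj_errors, motion, mean, cov_inv, lam=1.0, inlier_thresh=2.0):
    """RANSAC-style model score: robust data term plus a prior penalty.

    reproj_errors: per-correspondence reprojection errors (pixels) for a
    candidate motion hypothesis; motion: its parameter vector (e.g. three
    translations and three rotations). Lower scores are better.
    """
    data_term = np.minimum(reproj_errors, inlier_thresh).sum()  # truncated loss
    return data_term + lam * motion_neg_log_prior(motion, mean, cov_inv)
```

    With equal reprojection errors, a forward motion (matching the prior mean) scores better than an improbable sideways motion, which is the stabilizing effect the paper describes.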

  18. Camera calibration approach based on adaptive active target

    NASA Astrophysics Data System (ADS)

    Zhang, Yalin; Zhou, Fuqiang; Deng, Peng

    2011-12-01

    Aiming at calibrating a camera on site, where the lighting conditions are hardly controllable and the quality of target images declines as the angle between camera and target changes, an adaptive active target is designed and a camera calibration approach based on this target is proposed. The adaptive active target, in which LEDs are embedded, is flat and provides active feature points. The brightness of each feature point can therefore be modified by adjusting the drive current, judged against thresholds on image feature criteria. In order to extract image features accurately, the concept of subpixel-precise thresholding is also proposed. It converts the discrete representation of the digital image to a continuous function by bilinear interpolation, and sub-pixel contours are acquired as the intersection of this continuous function with an appropriately selected threshold. Based on an analysis of the relationship between image features and target brightness, the area ratio of convex hulls and the grey-value variance are adopted as the criteria. Experimental results revealed that the adaptive active target accommodates changing environmental illumination well, and that the camera calibration approach based on it achieves a high level of accuracy and fits image targeting in various industrial sites.
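    The subpixel-precise thresholding idea can be illustrated in one dimension: between two adjacent pixels that bracket the threshold, the (bi)linear interpolant crosses it at a fractional coordinate. A minimal sketch, assuming a grayscale image array and a single monotonic crossing; function names are illustrative, not from the paper.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolated intensity at continuous coordinates (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1]
            + (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def subpixel_crossing(img, y, x0, x1, thresh):
    """Sub-pixel x where the interpolated profile along row y crosses thresh.

    Scans columns x0..x1 for two pixels bracketing the threshold and solves
    the linear interpolant between them; returns None if no crossing exists.
    """
    for x in range(x0, x1):
        a, b = img[y, x], img[y, x + 1]
        if (a - thresh) * (b - thresh) <= 0 and a != b:
            return x + (thresh - a) / (b - a)   # linear crossing point
    return None
```

    Collecting such crossings along many rows and columns yields the sub-pixel contour the paper intersects with its chosen threshold.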

  19. An Undulator Based Polarized Positron Source for CLIC

    SciTech Connect

    Liu, Wanming; Gai, Wei; Rinolfi, Louis; Sheppard, John; /SLAC

    2012-07-02

    A viable positron source scheme is proposed that uses circularly polarized gamma rays generated from the main 250 GeV electron beam. The beam passes through a helical superconducting undulator with a magnetic field of ~1 T and a period of 1.15 cm. The gamma rays produced in the undulator, in the energy range between ~3 MeV and ~100 MeV, are directed onto a titanium target to produce polarized positrons. The positrons are then captured, accelerated and transported to a Pre-Damping Ring (PDR). Detailed parameter studies of this scheme, including positron yield and undulator parameter dependence, are presented. Effects on the 250 GeV CLIC main beam, including emittance growth and energy loss from the beam passing through the undulator, are also discussed.

  20. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
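    The two-step scheme (edge detection in the nearest-neighbour window, then smoothing only where no edge is found) can be sketched with a toy analogue. This is not the authors' fuzzy rule set or MAP iteration; the crisp threshold, 4-neighbour window, and averaging weight are stand-in assumptions that only illustrate the edge-preserving selectivity.

```python
import numpy as np

def edge_preserving_smooth(img, edge_thresh=0.2, iters=5):
    """Toy analogue of the two-step scheme: detect edges in the 4-neighbour
    window (step 1), then average only pixels with no detected edge (step 2)."""
    out = img.astype(float).copy()
    for _ in range(iters):
        p = np.pad(out, 1, mode='edge')
        # Stack up/down/left/right neighbours of every pixel.
        nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1], p[1:-1, :-2], p[1:-1, 2:]])
        edge = np.abs(nbrs - out).max(axis=0) > edge_thresh      # step 1
        mean = nbrs.mean(axis=0)
        out = np.where(edge, out, 0.5 * out + 0.5 * mean)        # step 2
    return out
```

    On a noisy piecewise-constant image, interior noise is smoothed away while the step edge between density classes is left untouched, which is the qualitative behaviour the paper claims over plain MAP penalization.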

  1. EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION FOR MTR CANAL. CAISSONS FLANK EACH SIDE. COUNTERFORT (SUPPORT PERPENDICULAR TO WHAT WILL BE THE LONG WALL OF THE CANAL) RESTS ATOP LEFT CAISSON. IN LOWER PART OF VIEW, DRILLERS PREPARE TRENCHES FOR SUPPORT BEAMS THAT WILL LIE BENEATH CANAL FLOOR. INL NEGATIVE NO. 739. Unknown Photographer, 10/6/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  2. rf streak camera based ultrafast relativistic electron diffraction.

    PubMed

    Musumeci, P; Moody, J T; Scoby, C M; Gutierrez, M S; Tran, T

    2009-01-01

    We theoretically and experimentally investigate the possibility of using an rf streak camera to time-resolve, in a single shot, structural changes at the sub-100 fs time scale via relativistic electron diffraction. We experimentally tested this novel concept at the UCLA Pegasus rf photoinjector. Time-resolved diffraction patterns from a thin Al foil are recorded. Averaging over 50 shots is required in order to get statistics sufficient to uncover a variation in time of the diffraction patterns. In the absence of an external pump laser, this is explained as due to the energy chirp on the beam out of the electron gun. With further improvements to the electron source, rf streak camera based ultrafast electron diffraction has the potential to yield truly single-shot measurements of ultrafast processes. PMID:19191429

  3. A novel compact gamma camera based on flat panel PMT

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Cinti, M. N.; Trotta, C.; Trotta, G.; Scafè, R.; Betti, M.; Cusanno, F.; Montani, Livia; Iurlaro, Giorgia; Garibaldi, F.; Del Guerra, A.

    2003-11-01

    Over the last ten years, strong technological advances in position-sensitive detectors have encouraged the scientific community to develop dedicated imagers for new diagnostic techniques in the field of isotope functional imaging. The main feature of the new detectors is a compactness that allows detection geometries fitting the body anatomy. Position-sensitive photomultiplier tubes (PSPMTs) have shown very good features with continuous improvement. In 1997 a novel gamma camera was proposed based on a closely packed array of second-generation 1 in PSPMTs. The main advantage is the potentially unlimited detection area, but with the disadvantage of a relatively large non-active area (30%). The Hamamatsu H8500 Flat Panel PMT represents the latest generation of PSPMT; its extreme compactness allows array assembly with an effective area improved up to 97%. This paper evaluates the potential improvement in imaging performance of a gamma camera based on the new PSPMT, compared with the two previous generations of PSPMTs. To this aim, the factors affecting the gamma camera's final response, such as PSPMT anode gain variation and position resolution, are analyzed and related to counting uniformity, energy resolution, position linearity, detection efficiency and intrinsic spatial resolution. The results show that uniformity of pulse-height response appears to be the main parameter determining the best imaging performance. Furthermore, even exact identification of individual pixels does not appear sufficient for a full correction of counting uniformity and gain response. However, considering the present technological limits, Flat Panel PSPMTs could be the best trade-off between gamma camera imaging performance, compactness and large detection area.
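    The abstract does not give its position-estimation formulas; the classical approach for PSPMT anode arrays is Anger logic, a charge-weighted centroid, optionally preceded by per-anode gain correction. The sketch below is this standard technique, not the paper's method; all names are illustrative.

```python
import numpy as np

def gain_correct(signals, gains):
    """Divide out per-anode gain variation before centroiding."""
    return signals / gains

def anger_centroid(signals, anode_x, anode_y):
    """Anger-logic position estimate: the charge-weighted centroid of the
    anode signals approximates the scintillation position on the detector."""
    w = signals / signals.sum()
    return float(w @ anode_x), float(w @ anode_y)
```

    Anode gain non-uniformity biases the weights and hence the centroid, which is one way the paper's observation (pulse-height uniformity driving imaging performance) can be understood.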

  4. Observation of Polarized Positrons from an Undulator-Based Source

    SciTech Connect

    Alexander, G; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K.T.; Mikhailichenko, A.A.; Moortgat-Pick, G.A.; Pahl, P.; /Tel Aviv U. /Cornell U., Phys. Dept. /SLAC /Tennessee U. /Humboldt U., Berlin /DESY /Yerevan Phys. Inst. /Aachen, Tech. Hochsch. /DESY, Zeuthen /Princeton U. /Durham U. /Daresbury

    2008-03-06

    An experiment (E166) at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons and photons in matter.

  5. Performance of the (n,{gamma})-Based Positron Beam Facility NEPOMUC

    SciTech Connect

    Schreckenbach, K.; Hugenschmidt, C.; Piochacz, C.; Stadlbauer, M.; Loewe, B.; Maier, J.; Pikart, P.

    2009-01-28

    The in-pile positron source of NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron capture gamma rays produced by cadmium in a beam tube tip close to the reactor core. The gamma ray absorption in platinum produces positrons, which are moderated and formed into the beam. An unprecedented beam intensity of 9×10⁸ e⁺/s is achieved at 1 keV. The performance and applications of the facility are presented.

  6. Performance of the (n,γ)-Based Positron Beam Facility NEPOMUC

    NASA Astrophysics Data System (ADS)

    Schreckenbach, K.; Hugenschmidt, C.; Löwe, B.; Maier, J.; Pikart, P.; Piochacz, C.; Stadlbauer, M.

    2009-01-01

    The in-pile positron source of NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron capture gamma rays produced by cadmium in a beam tube tip close to the reactor core. The gamma ray absorption in platinum produces positrons, which are moderated and formed into the beam. An unprecedented beam intensity of 9×10⁸ e⁺/s is achieved at 1 keV. The performance and applications of the facility are presented.

  7. Camera-based independent couch height verification in radiation oncology.

    PubMed

    Kusters, Martijn; Louwe, Rob; Biemans-van Kastel, Liesbeth; Nieuwenkamp, Henk; Zahradnik, Rien; Claessen, Roy; van Seters, Ronald; Huizenga, Henk

    2015-01-01

    For specific radiation therapy (RT) treatments, it is advantageous to use the isocenter-to-couch distance (ICD) for initial patient setup.(1) Since sagging of the treatment couch is not properly taken into account by the electronic readout of the treatment machine, this readout cannot be used for initial patient positioning using the ICD. Therefore, initial patient positioning to the prescribed ICD had been carried out using a ruler prior to each treatment fraction in our institution. However, the ruler method is laborious and logging of data is not possible. The objective of this study is to replace the ruler-based setup of the couch height with an independent, user-friendly, optical camera-based method whereby the radiation technologists only have to move the couch to the correct height, which is visible on a display. A camera-based independent couch height measurement system (ICHS) was developed in cooperation with Panasonic Electric Works Western Europe. Clinical data showed that the ICHS is at least as accurate as the application of a ruler to verify the ICD. The system has been successfully implemented in seven treatment rooms since 10 September 2012. The benefits are a more streamlined workflow, a reduction of human errors during initial patient setup, and logging of the actual couch height at the isocenter. Daily QA shows that the systems are stable and operate within the set 1 mm tolerance; regular QA of the system is necessary to guarantee that it works correctly. PMID:26699308

  8. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location with the current view to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigation algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale-change information from SIFT (Homing in Scale Space, HiSS). HiSS uses SIFT feature scale-change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images, combined with stereo data obtained from the stereo camera, to extend the keypoint vector to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.

  9. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample-consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate for the camera motion between consecutive frames. Unlike previous methods, our algorithm does not need any preprocessing step to compute dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporally coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.
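    The nonparametric sample-consensus background model can be sketched in a ViBe-like form: each pixel keeps a set of past samples, and a pixel is background if its current value is close to enough of them; background pixels occasionally refresh a random sample. This is a generic sketch of the model family, not the paper's exact implementation; the radius, match count, and subsampling rate are assumed values.

```python
import numpy as np

def classify(frame, samples, radius=20.0, min_matches=2):
    """Sample-consensus test: a pixel is background if its value is within
    `radius` of at least `min_matches` stored samples.

    frame: (H, W) grayscale frame; samples: (N, H, W) per-pixel sample set.
    Returns a boolean foreground mask.
    """
    matches = (np.abs(samples - frame.astype(float)) < radius).sum(axis=0)
    return matches < min_matches   # True = foreground

def update(frame, samples, fg_mask, rng):
    """Conservatively refresh one random sample at background pixels only."""
    n = samples.shape[0]
    idx = rng.integers(0, n, size=frame.shape)          # which sample to replace
    replace = (~fg_mask) & (rng.random(frame.shape) < 1 / 16)  # random subsampling
    for k in range(n):
        sel = replace & (idx == k)
        samples[k][sel] = frame[sel]
    return samples
```

    The conservative update (foreground pixels never contaminate the model) is what lets such models absorb slow appearance changes without absorbing moving objects.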

  10. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state, and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  11. Bioimpedance-based respiratory gating method for oncologic positron emission tomography (PET) imaging with first clinical results

    NASA Astrophysics Data System (ADS)

    Koivumäki, T.; Vauhkonen, M.; Teuho, J.; Teräs, M.; Hakulinen, M. A.

    2013-04-01

    Respiratory motion may cause significant image artefacts in positron emission tomography/computed tomography (PET/CT) imaging. This study introduces a new bioimpedance-based gating method for minimizing respiratory artefacts. The method was studied in 12 oncologic patients by evaluating the following three parameters: the maximum metabolic activity of radiopharmaceutical accumulations, the size of these targets, and their target-to-background ratio. The bioimpedance-gated images were compared with non-gated images and with images gated by a reference method, chest wall motion monitoring with an infrared camera. The bioimpedance method showed clear improvement, with increased metabolic activity and decreased target volume compared to non-gated images, and produced results consistent with the reference method. Thus, the method may have great potential for the future of respiratory gating in nuclear medicine imaging.

  12. Positron annihilation in neutron irradiated iron-based materials

    NASA Astrophysics Data System (ADS)

    Lambrecht, M.; Almazouzi, A.

    2011-01-01

    The hardening and embrittlement of reactor pressure vessel steels is of great concern in current nuclear power plant lifetime assessment. This embrittlement is caused by irradiation-induced damage such as vacancies, interstitials, solutes and their clusters, but the cause of the embrittlement of the material is not yet fully understood. The real nature of the irradiation damage should thus be examined, as well as its evolution in time. Positron annihilation spectroscopy has been shown to be a powerful method for analyzing some of these defects; in fact, both vacancy-type clusters and precipitates can be visualized by positrons. Recently, at SCK·CEN, a new setup has been constructed, calibrated and optimized to measure the coincidence Doppler broadening and lifetime of neutron-irradiated materials. To compare the results obtained by positron studies with those of other techniques (such as transmission electron microscopy, atom probe tomography and small-angle neutron scattering), quantitative estimates of the size and density of the annihilation sites are needed. Using the approach proposed by Vehanen et al., an attempt is made to calculate the needed quantities in Fe and Fe-Cu binary alloys that were neutron irradiated to different doses. The results obtained are discussed, highlighting the difficulties in defining the annihilation centres even in these simple model alloys, in spite of using both lifetime and Doppler broadening measurements on the same samples.

  13. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters carry complementary information, and this paper explains how to exploit that complementarity. The method is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation makes it possible to apply a dedicated algorithm to each case; on the other hand, it decreases the computation time for clean cases, for which a faster method can be used. Finally, several experiments are discussed, leading to the conclusion that discriminating between kinds of backgrounds gives better results in terms of Precision and Recall.
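    The color-clustering step can be sketched as a 2-class K-means on RGB pixels, assigning each pixel to a text or background cluster. This is a generic sketch of the clustering component only (no wavelet denoising, and a simple deterministic initialization assumed for reproducibility), not the paper's full pipeline.

```python
import numpy as np

def kmeans_binarize(pixels, iters=20):
    """2-class K-means on RGB pixels: returns per-pixel labels and the two
    cluster centers (one nominally text, one nominally background).

    pixels: (N, 3) array. Deterministic init: first and last pixel as seeds."""
    centers = pixels[[0, -1]].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest center, then recompute the centers.
        d = np.linalg.norm(pixels[:, None, :].astype(float) - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers
```

    Thresholding then amounts to keeping the cluster whose center is closer to the expected text color.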

  14. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2004-12-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters carry complementary information, and this paper explains how to exploit that complementarity. The method is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation makes it possible to apply a dedicated algorithm to each case; on the other hand, it decreases the computation time for clean cases, for which a faster method can be used. Finally, several experiments are discussed, leading to the conclusion that discriminating between kinds of backgrounds gives better results in terms of Precision and Recall.

  15. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than conventional instrumentation systems. In addition, there is no need to establish a coordinate system through the cooperative action of a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images, to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements, such as manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
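    The abstract does not give its projection math; under a simple pinhole model with a known (or separately estimated) target depth, pixel coordinates convert to metric camera-frame coordinates as sketched below. The focal length and principal point are assumed calibration values, not figures from the article.

```python
import numpy as np

def pixel_to_xyz(u, v, Z, f, cx, cy):
    """Back-project pixel (u, v) at known depth Z (mm) to camera-frame XYZ,
    using a pinhole model with focal length f (px) and principal point (cx, cy)."""
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return np.array([X, Y, Z])
```

    For example, a target imaged 100 px right of the principal point at 2 m depth with a 1000 px focal length sits 200 mm right of the optical axis; millimeter-level accuracy over meters, as the article states, then hinges on sub-pixel target localization and calibration quality.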

  16. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, using a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that considers both OCR accuracy and a newly introduced measure computed via a semi-automatic procedure. PMID:20876019
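
The coarse rectification step maps a distorted page region onto a 2-D rectangular area. As a generic illustration only (not the authors' text-guided curved-surface transformation), a four-point homography estimated by the direct linear transform already removes pure perspective distortion; the corner coordinates here are hypothetical:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)          # null-space vector = homography entries
    return H / H[2, 2]

# Map four detected page corners (skewed, hypothetical) to an upright rectangle.
corners = [(12, 30), (590, 8), (610, 820), (5, 790)]
target  = [(0, 0), (600, 0), (600, 800), (0, 800)]
H = homography_dlt(corners, target)
p = H @ np.array([12.0, 30.0, 1.0])
p = p / p[2]        # first corner maps to (0, 0) after rectification
```

A real pipeline would then resample the image under H; the word-level pose normalization of the paper goes beyond this planar model.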

  17. Whole blood glucose analysis based on smartphone camera module

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, the assay reagent is immobilized onto amine-functionalized silica (AFSiO2) nanoparticles via electrostatic attraction in order to achieve glucose oxidation on the chip. The reagent-immobilized AFSiO2 nanoparticles form a semi-transparent reaction platform, a chip well suited to analysis by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module through photon detection; the photon count decreases as the glucose concentration increases. Together, the CMOS image sensor and the enzyme-immobilized PET film chip constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, optical glucose-sensing approach for POC diagnosis.
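
The readout principle above, a photon count that decreases as glucose concentration increases, implies a calibration curve that is fitted once and then inverted for unknown samples. A minimal sketch with hypothetical, idealized calibration numbers (the record does not report this exact procedure or these values):

```python
import numpy as np

# Hypothetical calibration pairs: glucose (mg/dL) vs. mean photon count.
glucose = np.array([110.0, 200.0, 300.0, 400.0, 500.0, 586.0])
photons = np.array([9400.0, 8500.0, 7500.0, 6500.0, 5500.0, 4640.0])

# Fit photons = a * glucose + b, then invert to read out unknown samples.
a, b = np.polyfit(glucose, photons, 1)

def estimate_glucose(photon_count):
    """Invert the linear calibration: concentration from measured photons."""
    return (photon_count - b) / a

est = estimate_glucose(7000.0)   # sample between the 300 and 400 mg/dL points
```

In practice the response need not be linear over the whole 110-586 mg/dL range, in which case a higher-order fit or interpolation would replace the straight line.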

  18. Whole blood glucose analysis based on smartphone camera module.

    PubMed

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110–586 mg∕dL) of mouse blood glucose. In this technique, the assay reagent is immobilized onto amine-functionalized silica (AFSiO2) nanoparticles via electrostatic attraction in order to achieve glucose oxidation on the chip. The reagent-immobilized AFSiO2 nanoparticles form a semi-transparent reaction platform, a chip well suited to analysis by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module through photon detection; the photon count decreases as the glucose concentration increases. Together, the CMOS image sensor and the enzyme-immobilized PET film chip constitute a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, optical glucose-sensing approach for POC diagnosis. PMID:26524683

  19. Improvement of digital photoelasticity based on camera response function.

    PubMed

    Chang, Shih-Hsin; Wu, Hsien-Huang P

    2011-09-20

    Studies on photoelasticity have been conducted by many researchers in recent years, and many equations for photoelastic analysis based on digital images have been proposed. While these equations are all expressed in terms of the light intensity emitted from the analyzer, pixel values of the digital image are what is actually used in the real calculations. In this paper, we investigate the use of relative light intensity, obtained via the camera response function, in place of the pixel value for photoelastic analysis. Generation of isochromatic images based on relative light intensity and on pixel values was compared to evaluate the effectiveness of the new approach. The results showed that when relative light intensity is used, the quality of an isochromatic image can be greatly improved both visually and quantitatively. We believe that the technique proposed in this paper can also be used to improve the performance of other types of photoelastic analysis based on digital images. PMID:21947044

  20. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. A consistent quality management process identifies critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  1. Conceptual design of a slow positron source based on a magnetic trap

    NASA Astrophysics Data System (ADS)

    Volosov, V. I.; Meshkov, O. I.; Mezentsev, N. A.

    2001-09-01

    A unique 10.3 T superconducting wiggler was designed and manufactured at BINP SB RAS. The installation of this wiggler in the SPring-8 storage ring provides a possibility to generate a high-intensity beam of photons (SR) with energy above 1 MeV (Ando et al., J. Synchrotron Radiat. 5 (1998) 360). Conversion of photons to positrons on high-Z material (tungsten) targets creates an integrated positron flux of more than 10^13 particles per second. The energy spectrum of the positrons has a maximum at 0.5 MeV and a half-width of about 1 MeV (Plokhoi et al., Jpn. J. Appl. Phys. 38 (1999) 604). The traditional methods of positron moderation have an efficiency ε = Ns/Nf of 10^-4 (metallic moderators) to 10^-2 (solid rare gas moderators) (Mills and Gullikson, Appl. Phys. Lett. 49 (1986) 1121). The high flux of primary positrons restricts the choice to a tungsten moderator, which has ε ≈ 10^-4 only (Schultz, Nucl. Instr. and Meth. B 30 (1988) 94). The aim of our project is to obtain a moderation efficiency ε ⩾ 10^-1. We propose to moderate the positrons inside a multi-stage magnetic trap based on several (3-6) electromagnetic traps connected in series. The magnetic field of the traps grows consecutively from stage to stage. We propose to release the positrons from the converter with the use of an additional relativistic electron beam passing in synchronism with the SR pulse in the vicinity of the converter. The average electron beam energy and current are 1-2 MeV and 100 mA, respectively. The electric field of the beam is high enough to distort the positron paths by an amount comparable with the Larmor radius. The further drift of the positrons to the trap axis occurs due to the strengthening of the magnetic field. The magnetic field amplitude of adjacent traps varies in time in antiphase and increases from 0.9 T in the first stage to 6 T in the last one. The positron transition from stage to stage takes place at the moment of field equalization. The removal

  2. Formation of buffer-gas-trap based positron beams

    SciTech Connect

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2015-03-15

    Presented here are experimental measurements, analytic expressions, and simulation results for pulsed, magnetically guided positron beams formed using a Penning-Malmberg style buffer gas trap. In the relevant limit, particle motion can be separated into motion along the magnetic field and gyro-motion in the plane perpendicular to the field. Analytic expressions are developed which describe the evolution of the beam energy distributions, both parallel and perpendicular to the magnetic field, as the beam propagates through regions of varying magnetic field. Simulations of the beam formation process are presented, with the parameters chosen to accurately replicate experimental conditions. The initial conditions and ejection parameters are varied systematically in both experiment and simulation, allowing the relevant processes involved in beam formation to be explored. These studies provide new insights into the underlying physics, including significant adiabatic cooling, due to the time-dependent beam-formation potential. Methods to improve the beam energy and temporal resolution are discussed.

  3. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    NASA Astrophysics Data System (ADS)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    In November 2008 we installed in Saclay a facility for an intense positron source. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·10^11 per second. The study of the moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are part of a project to produce positively charged antihydrogen ions, aiming to demonstrate the feasibility of a free-fall antigravity measurement of neutral antihydrogen.

  4. A trap-based pulsed positron beam optimised for positronium laser spectroscopy.

    PubMed

    Cooper, B S; Alonso, A M; Deller, A; Wall, T E; Cassidy, D B

    2015-10-01

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields. PMID:26520934

  5. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    SciTech Connect

    Cooper, B. S.; Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B.

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  6. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. We can thus establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
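
The equivalence between a prism single-camera system and a dual-camera system can be made concrete with a rough back-of-envelope model. Under a thin-prism approximation (an assumption of this sketch, not the paper's full geometrical-optics derivation), a biprism of apex angle α and refractive index n deviates rays by δ ≈ (n − 1)α, and each half-prism defines one virtual viewpoint:

```python
import math

def virtual_baseline(n, alpha_deg, d):
    """Approximate virtual stereo baseline for a biprism in front of one camera.

    Thin-prism deviation: delta ≈ (n - 1) * alpha.
    The two half-prisms deviate rays in opposite directions, so the two
    virtual cameras are separated by roughly 2 * d * tan(delta), where d is
    the prism-to-camera distance.
    """
    delta = (n - 1.0) * math.radians(alpha_deg)
    return 2.0 * d * math.tan(delta)

# BK7-like glass (n ≈ 1.5), 10° half-prism angle, prism 5 cm from the lens:
b = virtual_baseline(n=1.5, alpha_deg=10.0, d=0.05)   # baseline ≈ 8.7 mm
```

The effective baseline, a few millimeters here, is what limits the depth resolution such a single-camera stereo rig can achieve.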

  7. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm of camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A highly accurate profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
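
The radial and decentering distortion mentioned above is what OpenCV models with the Brown-Conrady coefficients. A minimal sketch of how those coefficients act on normalized image coordinates (the coefficient values below are hypothetical, chosen only to illustrate barrel distortion):

```python
import numpy as np

def distort(xn, yn, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2) distortion
    to normalized image coordinates (Brown-Conrady model, as used in OpenCV)."""
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

# Barrel distortion (k1 < 0) pulls an off-axis point toward the image centre.
xd, yd = distort(0.5, 0.5, k1=-0.2, k2=0.0, p1=0.0, p2=0.0)
# (0.5, 0.5) -> (0.45, 0.45)
```

Calibration estimates these coefficients (together with the intrinsics) from the detected checkerboard corners; undistortion then inverts this mapping numerically.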

  8. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly. PMID:25710087

  9. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimate of the camera parameters is refined by a non-linear function minimization process. The proposed method obtains a more accurate calibration result, which is more physically meaningful. Simulated and real data are given to demonstrate the accuracy of the proposed method.
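
The back projection process can be sketched as follows: detected image points are back-projected onto the calibration plane (z = 0), and the 3D distance to the ideal board points forms the cost that the non-linear refinement would minimize. All parameter values below are synthetic illustrations, not from the paper:

```python
import numpy as np

def project(K, R, t, P):
    """Forward-project 3D points P (N x 3) to pixels with the pinhole model."""
    Xc = (R @ P.T).T + t
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:3]

def backproject_to_plane(K, R, t, uv):
    """Back-project pixels onto the calibration plane z = 0 (world frame)."""
    C = -R.T @ t                                  # camera centre, world frame
    out = []
    for u, v in uv:
        d = R.T @ np.linalg.inv(K) @ np.array([u, v, 1.0])
        s = -C[2] / d[2]                          # ray parameter at z = 0
        out.append(C + s * d)
    return np.array(out)

def bpp_cost(K, R, t, uv, P_ideal):
    """Mean 3D distance between back-projected and ideal board points."""
    err = backproject_to_plane(K, R, t, uv) - P_ideal
    return float(np.mean(np.linalg.norm(err, axis=1)))

# Synthetic setup: camera 1 m above a 4x4 board with 30 mm spacing.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
R = np.diag([1.0, -1.0, -1.0])
t = np.array([0.0, 0.0, 1.0])
board = np.array([[i * 0.03, j * 0.03, 0.0] for i in range(4) for j in range(4)])
uv = project(K, R, t, board)
c_true = bpp_cost(K, R, t, uv, board)        # ~0 with the true parameters
K_bad = K.copy(); K_bad[0, 0] = 1010.0       # mis-estimated focal length
c_bad = bpp_cost(K_bad, R, t, uv, board)     # nonzero 3D residual to minimize
```

The refinement step of the paper would adjust the parameters to drive this 3D residual, rather than the 2D reprojection residual, toward zero.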

  10. The System Design, Engineering Architecture, and Preliminary Results of a Lower-Cost High-Sensitivity High-Resolution Positron Emission Mammography Camera.

    PubMed

    Zhang, Yuxuan; Ramirez, Rocio A; Li, Hongdi; Liu, Shitao; An, Shaohui; Wang, Chao; Baghaei, Hossain; Wong, Wai-Hoi

    2010-02-01

    A lower-cost high-sensitivity high-resolution positron emission mammography (PEM) camera is developed. It consists of two detector modules, each with a planar detector bank of 20 × 12 cm^2. Each bank has 60 low-cost PMT-Quadrant-Sharing (PQS) LYSO blocks arranged in a 10 × 6 array with two types of geometries. One is a symmetric 19.36 × 19.36 mm^2 block made of 1.5 × 1.5 × 10 mm^3 crystals in a 12 × 12 array. The other is a 19.36 × 26.05 mm^2 asymmetric block made of 1.5 × 1.9 × 10 mm^3 crystals in a 12 × 13 array. One row (10) of the elongated blocks is used along one side of the bank to reclaim the half-empty PMT photocathode of the regular PQS design and thereby reduce the dead area at the edge of the module. The bank has a high overall crystal packing fraction of 88%, which results in very high sensitivity. The mechanical design and electronics were developed for low cost, compactness, and stability. Each module has four Anger-HYPER decoding electronics that can handle a count rate of 3 Mcps for single events. A simple two-module coincidence board with a hardware delay window for random coincidences has been developed, with an adjustable window of 6 to 15 ns. Performance parameters have been studied by preliminary tests and Monte Carlo simulations, including the crystal decoding map, the 17% energy resolution of the detectors, the point-source sensitivity of 11.5% at 50 mm bank-to-bank distance, the 1.2 mm spatial resolution, the 42 kcps peak noise equivalent count rate at 7.0 mCi total activity in a human body, and resolution phantom images. These results show that the design goal of building a lower-cost, high-sensitivity, high-resolution PEM detector has been achieved. PMID:20485539

  11. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. The sensor has two working modes: a single-point measurement mode and a multi-point measurement mode. An indoor calibration experiment was performed under a beam of standard polarized light. The results show that after noise reduction the accuracy of the sensor can reach 0.3256°. The sensor was also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor can measure the polarization distribution pattern when working in multi-point measurement mode. PMID:25051029

  12. A bionic camera-based polarization navigation sensor.

    PubMed

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. The sensor has two working modes: a single-point measurement mode and a multi-point measurement mode. An indoor calibration experiment was performed under a beam of standard polarized light. The results show that after noise reduction the accuracy of the sensor can reach 0.3256°. The sensor was also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor can measure the polarization distribution pattern when working in multi-point measurement mode. PMID:25051029

  13. Only Image Based for the 3D Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques, due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high-resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making the task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close-range photogrammetry, and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs under conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance immersive virtual tours. In order to verify the usefulness of the method, it was applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and the approach could be extended to the whole building.

  14. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.

  15. Positron source investigation by using CLIC drive beam for Linac-LHC based e+p collider

    NASA Astrophysics Data System (ADS)

    Arıkan, Ertan; Aksakal, Hüsnü

    2012-08-01

    Three different methods, namely conventional, Compton backscattering, and undulator-based, are employed for the production of positrons. The positrons to be used for e+p collisions in a Linac-LHC (Large Hadron Collider) based collider have been studied. The number of produced positrons as a function of drive beam energy and optimum target thickness has been determined. Three different targets, W75-Ir25, W75-Ta25, and W75-Re25, have been investigated as sources for the three methods. The number of positrons has been estimated with the FLUKA simulation code. The produced positrons are then sent through a following adiabatic matching device (AMD), and the capture efficiency is determined. Finally, the e+p collider luminosity corresponding to the methods mentioned above has been calculated with the CAIN code.

  16. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time-resolved x-ray measurements. There is a need for a ten-inch-manipulator (TIM) based streak camera that can be fielded in newer large-scale laser facilities. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  17. Positron emission tomography displacement sensitivity: predicting binding potential change for positron emission tomography tracers based on their kinetic characteristics.

    PubMed

    Morris, Evan D; Yoder, Karmen K

    2007-03-01

    There is great interest in positron emission tomography (PET) as a noninvasive assay of fluctuations in synaptic neurotransmitter levels, but questions remain regarding the optimal choice of tracer for such a task. A mathematical method is proposed for predicting the utility of any PET tracer as a detector of changes in the concentration of an endogenous competitor via displacement of the tracer (a.k.a., its 'vulnerability' to competition). The method is based on earlier theoretical work by Endres and Carson and by the authors. A tracer-specific predictor, the PET Displacement Sensitivity (PDS), is calculated from compartmental model simulations of the uptake and retention of dopaminergic radiotracers in the presence of transient elevations of dopamine (DA). The PDS predicts the change in binding potential (ΔBP) for a given change in receptor occupancy because of binding by the endogenous competitor. Simulations were performed using estimates of tracer kinetic parameters derived from the literature. For D2/D3 tracers, the calculated PDS indices suggest a rank order for sensitivity to displacement by DA as follows: raclopride (highest sensitivity), followed by fallypride, FESP, FLB, NMSP, and epidepride (lowest). Although the PDS takes into account the affinity constant for the tracer at the binding site, its predictive value cannot be matched by either a single equilibrium constant, or by any one rate constant of the model. Values for ΔBP have been derived from published studies that employed comparable displacement paradigms with amphetamine and a D2/D3 tracer. The values are in good agreement with the PDS-predicted rank order of sensitivity to displacement. PMID:16788713
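
The displacement idea, a transient rise in endogenous dopamine lowering the apparent binding of the tracer, can be illustrated with a generic two-tissue compartment simulation. This is a simplified sketch, not the authors' PDS computation: the rate constants, the plasma input, and the BP-like index below are all illustrative assumptions:

```python
import numpy as np

def simulate_binding(k3_scale, T=90.0, dt=0.01,
                     K1=0.1, k2=0.2, k3=0.06, k4=0.03):
    """Euler integration of a simple two-tissue compartment model.

    Cf: free tracer, Cb: specifically bound tracer.
    k3_scale(t) in (0, 1] multiplies k3, mimicking occupancy of receptors
    by a transient surge of endogenous dopamine. Returns a crude BP-like
    index: the ratio of the time integrals of bound and free tracer.
    """
    n = int(T / dt)
    Cf = Cb = 0.0
    cb_int = cf_int = 0.0
    for i in range(n):
        ti = i * dt
        Cp = np.exp(-0.1 * ti)              # simple decaying plasma input
        k3_t = k3 * k3_scale(ti)
        dCf = K1 * Cp - (k2 + k3_t) * Cf + k4 * Cb
        dCb = k3_t * Cf - k4 * Cb
        Cf += dt * dCf
        Cb += dt * dCb
        cb_int += dt * Cb
        cf_int += dt * Cf
    return cb_int / cf_int

bp_rest = simulate_binding(lambda t: 1.0)
# 40% receptor occupancy from t = 20 to 40 min lowers apparent binding:
bp_chal = simulate_binding(lambda t: 0.6 if 20.0 <= t <= 40.0 else 1.0)
delta_bp = (bp_rest - bp_chal) / bp_rest    # positive fractional ΔBP
```

A PDS-style analysis would repeat such simulations across tracers with their literature-derived rate constants and compare the resulting ΔBP values.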

  18. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device with four lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device then becomes a plenoptic or light-field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  19. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  20. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera's control module. This enables the remaining cameras to monitor their line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of stereo vision equipment smaller than 3 mm in diameter for 3D imaging in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
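
The adaptive supply-voltage regulation described above can be sketched as a simple proportional control loop. The linear voltage-to-line-period model below is a toy assumption made for this illustration; the real sensor response and the controller implemented in the paper are not specified here:

```python
def run_sync_loop(target_period_ns=9600.0, v0=1.80, gain=2e-4, steps=200):
    """Toy proportional control of a self-timed sensor's supply voltage.

    Assumed (hypothetical) sensor model: the line period shortens as the
    supply voltage rises, period_ns = 12000 - 2000 * (v - 1.5). The loop
    nudges the voltage until the measured line period matches the master's.
    """
    v = v0
    period = None
    for _ in range(steps):
        period = 12000.0 - 2000.0 * (v - 1.5)   # "measure" the line period
        error = period - target_period_ns        # positive -> running too slow
        v += gain * error                        # raise voltage to speed up
    return v, period

v, period = run_sync_loop()
# converges to the voltage whose line period equals the master's target
```

In the real system this loop runs per slave camera inside the FPGA control core, with the master camera's measured line period providing the target.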

  1. A new depth measuring method for stereo camera based on converted relative extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Song, Xiaowei; Yang, Lei; Wu, Yuanzhao; Liu, Zhong

    2013-08-01

    This paper presents a new depth measuring method for a dual-view stereo camera based on converted relative extrinsic parameters. The relative extrinsic parameters between the left and right cameras, which are obtained by stereo camera calibration, indicate the geometric relationships among the left principal point, the right principal point and the convergent point. Furthermore, the geometry formed by the corresponding points and the object can be obtained by converting between the corresponding points and the principal points. Therefore, the depth of the object can be calculated from the obtained geometry. The correctness of the proposed method has been proved in 3ds Max, and its validity has been verified on a binocular stereo system of Flea2 cameras. We compared our experimental results with those of a popular RGB-D camera (the Kinect). The comparison shows that our method is reliable and efficient, and requires no epipolar rectification.
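    For context, the textbook depth relation for a rectified stereo pair can be sketched as follows. This is the standard similar-triangles result, not the paper's converted-extrinsic-parameter formulation, which avoids epipolar rectification:

    ```python
    def stereo_depth(focal_px, baseline_m, x_left_px, x_right_px):
        """Depth from similar triangles in a rectified stereo pair:
        Z = f * B / d, where d = x_left - x_right is the disparity in pixels."""
        d = x_left_px - x_right_px
        if d <= 0:
            raise ValueError("point must be in front of the rig (positive disparity)")
        return focal_px * baseline_m / d
    ```

    For example, an 800 px focal length, 10 cm baseline and 20 px disparity give a depth of 4 m.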

  2. Unified Camera Tamper Detection Based on Edge and Object Information

    PubMed Central

    Lee, Gil-beom; Lee, Myeong-jin; Lim, Jongtae

    2015-01-01

    In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined in order to measure the amount of edge pixels that disappear in the current frame from the background frame while excluding edges in the foreground. Tamper attacks are detected if the difference between the edge disappearance rate and its temporal average is larger than an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated for short video sequences with three types of tamper attacks and for 24-h video sequences without tamper attacks; the algorithm is shown to achieve acceptable levels of detection and false alarm rates for all types of tamper attacks in real environments. PMID:25946628
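    A minimal sketch of the edge disappearance rate described above; the array names and the foreground-exclusion handling are our reading of the abstract, not the authors' code:

    ```python
    import numpy as np

    def edge_disappearance_rate(bg_edges, cur_edges, fg_mask=None):
        """Fraction of background edge pixels absent from the current frame,
        ignoring pixels covered by detected foreground objects (all arguments
        are boolean arrays of the same shape)."""
        bg = bg_edges.copy()
        if fg_mask is not None:
            bg &= ~fg_mask                      # exclude foreground edges
        n_bg = bg.sum()
        if n_bg == 0:
            return 0.0
        return float((bg & ~cur_edges).sum()) / n_bg
    ```

    A tamper alarm would then fire when this rate exceeds its temporal average by an adaptive threshold, as the abstract describes.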

  3. Automatic camera calibration method based on dashed lines

    NASA Astrophysics Data System (ADS)

    Li, Xiuhua; Wang, Guoyou; Liu, Jianguo

    2013-10-01

    We present a new method for fully automatic calibration of traffic cameras using the end points of dashed lane lines. Our approach uses an improved RANSAC method, aided by a transverse projection of pixels, to detect the dashed lines and the end points on them. Then, combining an analysis of the geometric relationship between the camera and road coordinate systems, we construct a road model to fit the end points. Finally, using a two-dimensional calibration method, we convert pixels in the image to meters along the ground-truth lane. In a large number of experiments covering a variety of conditions, our approach performs well, achieving less than 5% error in measuring test lengths in all cases.

  4. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A camera sensor captures a gray-level image (1536 x 512 pixels) of a reference body illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors, which serve to determine the size of an unknown reference body.

  5. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    Under space environmental conditions and the vibration and shock of launch, the CCD plane of a space tridimensional mapping camera may deviate from the focal plane (including deviation caused by a change in camera focal length), and image resolution degrades because of defocusing. For a tridimensional mapping camera, variations in the principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is to calibrate the position of the CCD plane against the code of a photoelectric encoder under vacuum over the focusing range; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a stepper motor compensates for the displacement of the CCD plane. However, if the camera focal length itself changes under launch vibration, shock and the space environment, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism was put forward: a quadrant detector measures the astigmatism caused by the deviation of the CCD plane and, referring to the calibrated relation between the CCD plane position and the astigmatism, the deviation vector of the CCD plane can be obtained. This method covers all factors that cause deviation of the CCD plane. Experimental results show that the focusing resolution of the mapping camera focusing mechanism based on the astigmatic method can reach 0.25 μm.
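    The quadrant-detector error signal used in astigmatic focus sensing is conventionally formed from the four quadrant intensities. The normalization below is the textbook form; the paper's calibration maps this signal to an actual CCD-plane displacement:

    ```python
    def focus_error(a, b, c, d):
        """Astigmatic focus error from quadrant intensities A..D: the beam spot
        elongates along one diagonal in front of focus and along the other
        behind it, so the sign gives the defocus direction and the magnitude
        (after calibration) its amount."""
        total = a + b + c + d
        if total == 0:
            raise ValueError("no light on the detector")
        return ((a + c) - (b + d)) / total
    ```

    At best focus the spot is round, all quadrants receive equal light, and the signal is zero.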

  6. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. Deployment aims to use the fewest cameras to cover the whole scene, which may contain obstacles that occlude the line of sight, with the expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment that optimizes the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. We then relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, a high-quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826
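    The ℓ1 relaxation can be solved with standard first-order methods. Below is a generic iterative soft-thresholding (ISTA) sketch for min ½‖Ax−b‖² + λ‖x‖₁, a stand-in for the paper's convex program rather than its actual solver:

    ```python
    import numpy as np

    def ista(A, b, lam=0.1, iters=500):
        """Iterative soft-thresholding: a gradient step on the quadratic term
        followed by shrinkage, which drives most coordinates exactly to zero
        (the sparsity that makes the l1 relaxation select few cameras)."""
        step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x - step * (A.T @ (A @ x - b))          # gradient step
            x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink
        return x
    ```

    With A the identity, the solver reduces to coordinate-wise soft thresholding of b, which makes its sparsifying behavior easy to check.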

  7. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the location technique of beacon photogrammetry, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target, image pixel, world and camera coordinate systems were established. According to the ideal pin-hole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states the position measurement accuracy can meet the requirements of the project.

  8. An algorithm for computing extrinsic camera parameters for far-range photogrammetry based on essential matrix

    NASA Astrophysics Data System (ADS)

    Cai, Huimin; Li, Kejie; Liu, Meilian

    2010-11-01

    Far-range photogrammetry is widely used for location determination in dangerous situations. In this paper we discuss a camera calibration problem suitable for outdoor use. Location determination based on stereo vision sensors requires knowledge of the camera parameters, such as camera position, orientation, lens distortion and focal length, with high precision. Most existing camera calibration methods rely on placing many landmarks whose positions are known accurately, but due to the large distances and other practical constraints the landmarks cannot be placed with high precision. This paper shows that even if the positions of the landmarks are unknown, the extrinsic camera parameters can still be recovered via the essential matrix. The difference between the real and the computed camera parameters gives rise to a geometric error; we develop and present a theoretical analysis of this error and of how to obtain the extrinsic camera parameters with high precision in large-scale measurement. Experimental results from a project measuring the drop point of a high-speed object confirm that the proposed method achieves high precision compared with traditional methods.
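    The identity underlying the approach is the epipolar constraint: with E = [t]x R, any pair of normalized image points x1, x2 of the same 3D point satisfies x2' E x1 = 0. A minimal numerical check on a synthetic pose and point (not the paper's data):

    ```python
    import numpy as np

    def skew(t):
        """Cross-product matrix: skew(t) @ v equals np.cross(t, v)."""
        return np.array([[0.0, -t[2], t[1]],
                         [t[2], 0.0, -t[0]],
                         [-t[1], t[0], 0.0]])

    def essential_from_pose(R, t):
        """Essential matrix E = [t]_x R for relative pose (R, t)."""
        return skew(t) @ R

    # Synthetic rig: camera 2 rotated 5 degrees about Y, translated along X.
    th = np.deg2rad(5.0)
    R = np.array([[np.cos(th), 0.0, np.sin(th)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(th), 0.0, np.cos(th)]])
    t = np.array([1.0, 0.0, 0.0])

    X1 = np.array([0.3, 0.2, 2.0])       # 3D point in camera-1 coordinates
    X2 = R @ X1 + t                      # same point in camera-2 coordinates
    x1, x2 = X1 / X1[2], X2 / X2[2]      # normalized image points
    residual = x2 @ essential_from_pose(R, t) @ x1
    ```

    The residual is zero up to floating-point error, which is the constraint an essential-matrix estimator inverts to recover R and t from point matches.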

  9. Design of an infrared camera based aircraft detection system for laser guide star installations

    SciTech Connect

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance resulting from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.

  10. Study of a Coincidence Detector Using a Suspension of Superheated Superconducting Grains in a High Density Dielectric Matrix for Positron Emission Tomography and γ-γ Tagging

    NASA Astrophysics Data System (ADS)

    Bruère Dawson, R.; Maillard, J.; Maurel, G.; Parisi, J.; Silva, J.; Waysand, G.

    2006-01-01

    We demonstrate the feasibility of coincidence detectors based on superheated superconducting grains (SSG) in a high density dielectric matrix (HDDM) for two applications: 1) positron cameras for small animal imaging, where two diametrically opposite cells are simultaneously hit by 511 keV gammas; 2) tagging of γ-γ events in electron positron colliders.

  11. Microstructure Evaluation of Fe-BASED Amorphous Alloys Investigated by Doppler Broadening Positron Annihilation Technique

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Huang, Ping; Wang, Yuxin; Yan, Biao

    2013-07-01

    The microstructure of Fe-based amorphous and nanocrystalline soft magnetic alloys has been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM) and the Doppler broadening positron annihilation technique (PAT). Doppler broadening measurements reveal that amorphous alloys that can form a nanocrystalline phase (Finemet, Type I) have more defects (free volume) than alloys that cannot (Metglas, Type II). XRD and TEM characterization indicates that the nanocrystallization of the amorphous Finemet alloy occurs at 460°C, where nanocrystallites of α-Fe with an average grain size of a few nanometers form in an amorphous matrix. With annealing temperature increasing up to 500°C, the average grain size increases to around 12 nm. During the annealing of the Finemet alloy, it has been demonstrated that positrons annihilate in quenched-in defects, the crystalline nanophase and the amorphous-nanocrystalline interfaces. The change of the line shape parameter S with annealing temperature in the Finemet alloy is mainly due to structural relaxation, the pre-nucleation of Cu nuclei and the nanocrystallization of the α-Fe(Si) phase during annealing. This study provides new insights into positron behavior during the nanocrystallization of metallic glasses, especially in the presence of single or multiple nanophases embedded in the amorphous matrix.
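    The line-shape parameter S mentioned above is conventionally computed as the fraction of annihilation-peak counts in a narrow central energy window around 511 keV. The window half-width below is an illustrative choice; each lab calibrates its own:

    ```python
    import numpy as np

    def s_parameter(energies_kev, counts, center=511.0, half_width=0.8):
        """Doppler-broadening S parameter: counts in the central window of the
        511 keV annihilation peak divided by total peak counts. More
        open-volume defects narrow the peak and raise S."""
        e = np.asarray(energies_kev, dtype=float)
        c = np.asarray(counts, dtype=float)
        central = np.abs(e - center) <= half_width
        return float(c[central].sum() / c.sum())
    ```

    On a toy 5-channel spectrum, only the 511 keV channel falls inside the +/-0.8 keV window, so S is that channel's share of the total counts.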

  12. Characterization of the CCD and CMOS cameras for grating-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Lytaev, Pavel; Hipp, Alexander; Lottermoser, Lars; Herzen, Julia; Greving, Imke; Khokhriakov, Igor; Meyer-Loges, Stephan; Plewka, Jörn; Burmester, Jörg; Caselle, Michele; Vogelgesang, Matthias; Chilingaryan, Suren; Kopmann, Andreas; Balzer, Matthias; Schreyer, Andreas; Beckmann, Felix

    2014-09-01

    In this article we present the quantitative characterization of CCD and CMOS sensors which are used at the experiments for microtomography operated by HZG at PETRA III at DESY in Hamburg, Germany. A standard commercial CCD camera is compared to a camera based on a CMOS sensor. This CMOS camera is modified for grating-based differential phase-contrast tomography. The main goal of the project is to quantify and to optimize the statistical parameters of this camera system. These key performance parameters such as readout noise, conversion gain and full-well capacity are used to define an optimized measurement for grating-based phase-contrast. First results will be shown.

  13. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step of computer vision and one of its most active research fields. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated, so a high-accuracy camera calibration algorithm based on images of planar or tridimensional targets is proposed. Using the algorithm, the internal parameters of the camera were calibrated with the existing planar target of a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.

  14. Camera-based curvature measurement of a large incandescent object

    NASA Astrophysics Data System (ADS)

    Ollikkala, Arttu V. H.; Kananen, Timo P.; Mäkynen, Anssi J.; Holappa, Markus

    2013-04-01

    The goal of this work was to implement a low-cost machine vision system to help the roller operator estimate the amount of strip camber during the rolling process. The machine vision system, comprising a single camera, a standard PC and a program written in LabVIEW, uses straightforward image analysis to determine the magnitude and direction of camber and presents the results in both numerical and graphical form on the computer screen. The system was calibrated with an LED set-up, which was also used to validate the accuracy of the system by mimicking strip curvatures. The validation showed that the maximum difference between the true and measured values was less than +/-4 mm (k=0.95) within the 22-meter-long test pattern.

  15. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion. To this end, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  16. A descriptive geometry based method for total and common cameras fields of view optimization

    NASA Astrophysics Data System (ADS)

    Salmane, H.; Ruichek, Y.; Khoudour, L.

    2011-07-01

    The presented work is conducted in the framework of the ANR-VTT PANsafer project (Towards a safer level crossing). One of the objectives of the project is to develop a video surveillance system able to detect and recognize potentially dangerous situations around level crossings. This paper addresses the problem of camera positioning and orientation for optimal viewing of monitored scenes. In general, adjusting camera position and orientation is achieved experimentally and empirically by considering different geometrical configurations. This step requires a lot of time to approximately adjust the total and common fields of view of the cameras, especially in constrained environments such as level crossings. In order to simplify this task and obtain more precise camera positioning and orientation, we propose in this paper a method that automatically optimizes the total and common camera fields of view with respect to the desired scene. Based on descriptive geometry, the method estimates the best camera positions and orientations by optimizing the surfaces of 2D domains obtained by projecting/intersecting the field of view of each camera on/with horizontal and vertical planes. The proposed method is evaluated and tested to demonstrate its effectiveness.

  17. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth of field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers may also use ground-truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers should be able to implement an optimal gaze tracking system more easily. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768
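    The viewing-angle/DOF trade-off the study measures can be illustrated with the textbook thin-lens formulas. All parameter values below are illustrative, not the paper's camera:

    ```python
    def depth_of_field(f_mm, f_number, subject_mm, coc_mm=0.015):
        """Near and far limits of acceptable sharpness for a thin lens, from
        the hyperfocal distance H = f^2 / (N * c) + f and the standard DOF
        formulas (c is the circle-of-confusion diameter)."""
        H = f_mm ** 2 / (f_number * coc_mm) + f_mm
        near = subject_mm * (H - f_mm) / (H + subject_mm - 2 * f_mm)
        if subject_mm >= H:
            return near, float("inf")
        far = subject_mm * (H - f_mm) / (H - subject_mm)
        return near, far
    ```

    Opening the aperture (smaller f-number) shrinks the in-focus interval, which is the trade-off a gaze tracking camera must balance against light gathering.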

  18. Metric Potential of a 3D Measurement System Based on Digital Compact Cameras

    PubMed Central

    Sanz-Ablanedo, Enoc; Rodríguez-Pérez, José Ramón; Arias-Sánchez, Pedro; Armesto, Julia

    2009-01-01

    This paper presents an optical measuring system based on low cost, high resolution digital cameras. Once the cameras are synchronised, the portable and adjustable system can be used to observe living beings, bodies in motion, or deformations of very different sizes. Each of the cameras has been modelled individually and studied with regard to the photogrammetric potential of the system. We have investigated the photogrammetric precision obtained from the crossing of rays, the repeatability of results, and the accuracy of the coordinates obtained. Systematic and random errors are identified in validity assessment of the definition of the precision of the system from crossing of rays or from marking residuals in images. The results have clearly demonstrated the capability of a low-cost multiple-camera system to measure with sub-millimetre precision. PMID:22408520

  19. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  20. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    NASA Astrophysics Data System (ADS)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of FMC. This paper evaluates the strengths of each sensor type based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field has about 30 Ground Control Points (GCPs), providing an ideal scenario for a proper geometric evaluation of the cameras. It includes both a Siemens star and scale bars to reveal any blurring caused by forward motion. The comparison showed that both cameras offer highly accurate photogrammetric results after post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.

  1. A Compton camera application for the GAMOS GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Arce, P.; Judson, D. S.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Dormand, J.; Jones, M.; Nolan, P. J.; Sampson, J. A.; Scraggs, D. P.; Sweeney, A.; Lazarus, I.; Simpson, J.

    2012-04-01

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script-based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double-sided high-purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.

  2. Multi-camera calibration based on openCV and multi-view registration

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method combining OpenCV-based calibration with multi-view registration is proposed. First, using a Zhang calibration plate (8x8 chessboard pattern) and several cameras (three industrial-grade CCDs), 9 groups of images are shot from different angles, and OpenCV is used to quickly calibrate the intrinsic parameters of each camera. Second, based on the correspondences between the camera views, the computation of the rotation and translation matrices is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of the derivative of a matrix-valued function, formulae for the rotation and translation matrices are deduced using a singular value decomposition algorithm. Afterwards, an iterative method is used to obtain the full coordinate transformation between pair-wise views; thus precise multi-view registration can be conveniently achieved, yielding the relative positions of the cameras (their extrinsic parameters). Experimental results show that the method is practical for multi-camera calibration.
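    The closed-form rotation/translation step the abstract derives via singular value decomposition is essentially the classic Kabsch solution to the orthogonal Procrustes problem. A generic sketch, not the authors' exact formulation:

    ```python
    import numpy as np

    def rigid_register(P, Q):
        """Least-squares R, t with R @ p_i + t ~= q_i for corresponding 3D
        points (rows of P and Q), via SVD of the cross-covariance matrix."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        # Sign correction guarantees a proper rotation (no reflection).
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp
    ```

    Given points and their transformed copies, the routine recovers the rotation and translation exactly (up to floating-point error), which is the pair-wise building block that the iterative multi-view registration chains together.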

  3. Positron emission tomography.

    PubMed

    Hoffman, E J; Phelps, M E

    1979-01-01

    Conventional nuclear imaging techniques utilizing lead collimation rely on radioactive tracers with little role in human physiology. The principles of imaging based on coincidence detection of the annihilation radiation produced in positron decay indicate that this mode of detection is uniquely suited for use in emission computed tomography. The only gamma-ray-emitting isotopes of carbon, nitrogen, and oxygen are positron emitters, which yield energies too high for conventional imaging techniques. Thus the development of positron imaging in nuclear medicine would make possible the use of a new class of physiologically active, positron-emitting radiopharmaceuticals. The application of these principles is described in the use of a physiologically active compound labeled with a positron emitter, together with positron-emission computed tomography, to measure the local cerebral metabolic rate in humans. PMID:440173

  4. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  5. SIFT-Based Indoor Localization for Older Adults Using Wearable Camera

    PubMed Central

    Zhang, Boxue; Zhao, Qi; Feng, Wenquan; Sun, Mingui; Jia, Wenyan

    2015-01-01

    This paper presents an image-based indoor localization system for tracking older individuals' movement at home. In this system, images are acquired at a low frame rate by a miniature camera worn conveniently at the chest position. The correspondence between adjacent frames is first established by matching the SIFT (scale-invariant feature transform) key points in a pair of images. The location changes of these points are then used to estimate the position of the wearer based on the pinhole camera model. A preliminary study conducted in an indoor environment indicates that the location of the wearer can be estimated with adequate accuracy. PMID:26190909
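    Under a pure-translation assumption, the pinhole model links the pixel shift of a matched key point to the wearer's lateral displacement. This is a simplification of the paper's estimator, and the focal-length and depth values in the example are illustrative:

    ```python
    def lateral_shift_m(pixel_shift, depth_m, focal_px):
        """Pinhole model: a feature at depth Z whose image moves by dx pixels
        implies a lateral camera displacement of about Z * dx / f."""
        return depth_m * pixel_shift / focal_px
    ```

    For example, a 40 px shift of a feature 2 m away, seen through an 800 px focal length, corresponds to roughly 0.1 m of movement.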

  6. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels. PMID:27534480

  7. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, and high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient; it also allows PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the mechanics and cooling system for a full-scale, 1764-pixel camera are ongoing. The paper describes the status of the project.

  8. Positron microscopy

    SciTech Connect

    Hulett, L.D. Jr.; Xu, J.

    1995-02-01

    The negative work function that some materials have for positrons makes possible the development of positron reemission microscopy (PRM). Because of the low energies with which the positrons are emitted, some unique applications, such as the imaging of defects, can be made. The history of the concept of PRM, and its present state of development, will be reviewed. The potential of positron microprobe techniques will also be discussed.

  9. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    NASA Astrophysics Data System (ADS)

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ-dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as the detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity than standard whole-body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of depth of interaction (DOI) information from relatively thick crystals, constraining the size of the smallest detectable tumor. This work shows how to overcome this intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices, with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  10. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  11. Submap joining smoothing and mapping for camera-based indoor localization and mapping

    NASA Astrophysics Data System (ADS)

    Bjärkefur, J.; Karlsson, A.; Grönwall, C.; Rydell, J.

    2011-06-01

    Personnel positioning is important for safety in e.g. emergency response operations. In GPS-denied environments, possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras. Many camera-based systems create a map and localize themselves relative to it. The computational complexity of most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated using e.g. EKF-SLAM) by the use of smoothing. Using this approach it is possible to build large maps in close to linear time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision camera.
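The core bookkeeping of submap joining, expressing each local submap in the global frame via the pose at which it was started, can be sketched in 2D as follows (function names and values are illustrative; the paper's conditional-independence and smoothing machinery is omitted):

```python
import numpy as np

def compose(pose_a, pose_b):
    """Compose two 2D poses (x, y, theta): pose_b expressed in pose_a's parent frame."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    c, s = np.cos(ta), np.sin(ta)
    return (xa + c * xb - s * yb, ya + s * xb + c * yb, ta + tb)

def join_submap(global_pose, local_landmarks):
    """Map landmark estimates from a submap's local frame into the global frame."""
    x, y, t = global_pose
    R = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
    return (R @ np.asarray(local_landmarks).T).T + np.array([x, y])

# submap 2 starts at the pose where submap 1 ended
g = compose((1.0, 0.0, np.pi / 2), (2.0, 0.0, 0.0))
lm = join_submap(g, [[1.0, 0.0]])   # one landmark ahead of the submap origin
```

Each new submap is anchored at the final pose of the previous one, so a long trajectory is handled as a chain of small, cheap local maps.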

  12. Narrow Field-of-View Visual Odometry Based on a Focused Plenoptic Camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2015-03-01

    In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from the focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm, making visual odometry possible even for narrow field of view (FOV) cameras. We show that not only tracking profits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to that of a single light-field image.

  13. Linking the near-surface camera-based phenological metrics with leaf chemical and spectroscopic properties

    NASA Astrophysics Data System (ADS)

    Yang, X.; Tang, J.; Mustard, J. F.; Schmitt, J.

    2012-12-01

    Plant phenology is an important indicator of climate change. Near-surface cameras provide a way to continuously monitor plant canopy development at the scale of several hundred meters, which is rarely feasible with either traditional phenological monitoring methods or remote sensing. Thus, digital cameras are being deployed in national networks such as the National Ecological Observatory Network (NEON) and PhenoCam. However, it is unclear how camera-based phenological metrics are linked with plant physiology as measured from leaf chemical and spectroscopic properties throughout the growing season. We used the temporal trajectories of leaf chemical properties (chlorophyll a and b, carotenoids, leaf water content, leaf carbon/nitrogen content) and leaf reflectance/transmittance (300 to 2500 nm) to understand the temporal changes of camera-based phenological metrics (e.g., relative greenness), which were acquired from our Standalone Phenological Observation System installed on a tower on the island of Martha's Vineyard, MA (dominant species: Quercus alba). Leaf chemical and spectroscopic properties of three oak trees near the tower were measured weekly from June to November 2011. We found that the chlorophyll concentration showed temporal trajectories similar to the relative greenness, although the change in chlorophyll concentration lagged behind the change in relative greenness by about 20 days in both the spring and the fall. The relative redness is a better indicator of leaf senescence in the fall than the relative greenness. We also derived relative greenness from leaf spectroscopy and found that the camera-based relative greenness matched that from spectroscopy well in mid-summer, but the relationship faded as leaves started to fall, exposing the branches and soil background. This work suggests that camera-based phenological metrics should be interpreted with caution, and that relative redness could be a useful indicator of fall senescence.
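The relative greenness used here is commonly computed as the green chromatic coordinate, G/(R+G+B), averaged over a region of interest, with relative redness defined analogously from the red channel. A minimal sketch (the function name and sample digital numbers are illustrative):

```python
def chromatic_coords(r, g, b):
    """Relative greenness (GCC) and relative redness (RCC) from mean
    red, green, and blue digital numbers of a canopy region of interest."""
    total = r + g + b
    return g / total, r / total

# toy mean channel values for one image
gcc, rcc = chromatic_coords(80.0, 120.0, 50.0)
```

Normalizing by the channel sum suppresses overall brightness changes (clouds, sun angle), which is why the chromatic coordinates rather than raw channel values are tracked through the season.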

  14. A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features

    PubMed Central

    Wu, Xiaolong; Wu, Sentang; Xing, Zhihui; Jia, Xiang

    2016-01-01

    This paper presents a global calibration method for widely distributed vision sensors in ring topologies. A planar target with two mutually orthogonal groups of parallel lines is needed for each camera. Firstly, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. Then the relative pose from each target to the reference target is initialized by the chain of transformations, followed by nonlinear optimization based on the constraint of ring topologies. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulated images, and real experiments all demonstrate that the proposed method is reliable and accurate. The accumulated error due to multiple coordinate transformations can be adjusted effectively by the proposed method. In a real experiment, eight targets were located in an area of about 1200 mm × 1200 mm. The accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations. PMID:27338386
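The chain-of-transformations initialization can be illustrated with homogeneous 4×4 transforms: the pose of target n relative to the reference is the product of the neighbor-to-neighbor poses, and in a ring topology the composed loop should return to the identity, any residual being the accumulated error that the nonlinear optimization redistributes. A toy numpy sketch (function names and values are illustrative, not from the paper):

```python
import numpy as np

def chain(transforms):
    """Compose a chain of 4x4 homogeneous transforms T_0^1, T_1^2, ... into T_0^n."""
    T = np.eye(4)
    for Ti in transforms:
        T = T @ Ti
    return T

def rt(theta_z, t):
    """Build a 4x4 transform from a rotation about z and a translation."""
    c, s = np.cos(theta_z), np.sin(theta_z)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# four 90-degree hops around a square ring should return to the start;
# with real (noisy) pose measurements the residual is the loop-closure error
loop = chain([rt(np.pi / 2, [1, 0, 0])] * 4)
closure_error = np.linalg.norm(loop - np.eye(4))
```

With noise-free toy poses the closure error is numerically zero; in practice it is this residual that the ring-topology constraint spreads over the chain.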

  15. Broadband Sub-terahertz Camera Based on Photothermal Conversion and IR Thermography

    NASA Astrophysics Data System (ADS)

    Romano, M.; Chulkov, A.; Sommier, A.; Balageas, D.; Vavilov, V.; Batsale, J. C.; Pradere, C.

    2016-05-01

    This paper describes a fast sub-terahertz (THz) camera based on a quantum infrared camera coupled with a photothermal converter, called a THz-to-Thermal Converter (TTC), which allows fast image acquisition. The performance of the experimental setup is presented and discussed, with an emphasis on the advantages of the proposed method for decreasing noise in raw data and increasing the image acquisition rate. A detectivity of 160 pW Hz^(-1/2) per pixel has been achieved, and some examples of the practical implementation of sub-THz imaging are given.

  16. Shape Function-Based Estimation of Deformation with Moving Cameras Attached to the Deforming Body

    NASA Astrophysics Data System (ADS)

    Jokinen, O.; Ranta, I.; Haggrén, H.; Rönnholm, P.

    2016-06-01

    The paper presents a novel method to measure the 3-D deformation of a large metallic frame structure of a crane under loading from one to several images, when the cameras need to be attached to the self-deforming body, the structure sways during loading, and the imaging geometry is not optimal due to physical limitations. The solution is based on modeling the deformation with adequate shape functions and taking into account that the cameras move depending on the frame deformation. It is shown that the deformation can be estimated even from a single image of targeted points if the 3-D coordinates of the points are known or have been measured before loading using multiple cameras or some other measuring technique. The precision of the method is evaluated to be 1 mm at best, corresponding to 1:11400 of the average distance to the target.

  17. Mach-Zehnder based optical marker/comb generator for streak camera calibration

    SciTech Connect

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate the system and establish a time reference, markers (timing pulses) or combs (a constant-frequency train of optical pulses) are imaged on a streak camera for accurate time-based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher frequency optical signal that is output through a fiber-coupled link to the streak camera.

  18. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, the perspective projection invariance is preserved in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in the measurement application. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253

  19. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…
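Quantitative colorimetric readout of this kind typically reduces to fitting a calibration curve of channel intensity against known standards and inverting it for unknown samples. A minimal numpy sketch (the linear model, standards, and names are illustrative, not from the article):

```python
import numpy as np

# hypothetical calibration: mean green-channel intensity vs known concentration
conc = np.array([0.0, 1.0, 2.0, 4.0])        # mM, assumed standards
signal = np.array([10.0, 30.0, 50.0, 90.0])  # mean channel value from phone image

# fit a Beer-Lambert-like linear response: signal = slope * conc + intercept
slope, intercept = np.polyfit(conc, signal, 1)

def concentration(measured_signal):
    """Invert the calibration line to read a concentration from a new assay spot."""
    return (measured_signal - intercept) / slope

c = concentration(70.0)
```

In a classroom setting, the RGB reader app supplies the mean channel values and the fit can be done in any spreadsheet; the sketch shows the arithmetic being performed.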

  20. Efficient intensity-based camera pose estimation in presence of depth

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Nestares, Oscar; Wu, Yi; Kozintsev, Igor; Haussecker, Horst

    2013-03-01

    The widespread success of Kinect enables users to acquire both image and depth information with satisfying accuracy at relatively low cost. We leverage the Kinect output to efficiently and accurately estimate the camera pose in the presence of rotation, translation, or both. The applications of our algorithm are vast, ranging from camera tracking to 3D point cloud registration and video stabilization. The state-of-the-art approach uses point correspondences for estimating the pose. More explicitly, it extracts point features from images, e.g., SURF or SIFT, builds their descriptors, and matches features from different images to obtain point correspondences. However, while feature-based approaches are widely used, they perform poorly in scenes lacking texture, due to scarcity of features, or in scenes with repetitive structure, due to false correspondences. Our algorithm is intensity-based and requires neither point feature extraction nor descriptor generation and matching. In the absence of depth, an intensity-based approach alone cannot handle camera translation. With Kinect capturing both image and depth frames, we extend the intensity-based algorithm to estimate the camera pose in the case of both 3D rotation and translation. The results are quite promising.

  1. Trap-Based Beam Formation Mechanisms and the Development of an Ultra-High-Energy-Resolution Cryogenic Positron Beam

    NASA Astrophysics Data System (ADS)

    Natisin, Michael Ryan

    The focus of this dissertation is the development of a positron beam with significantly improved energy resolution over any beam resolution previously available. While positron interactions with matter are important in a variety of contexts, the range of experimental data available regarding fundamental positron-matter interactions is severely limited as compared to analogous electron-matter processes. This difference is due largely to the difficulties encountered in creating positron beams with narrow energy spreads. Described here is a detailed investigation into the physical processes operative during positron cooling and beam formation in state-of-the-art, trap-based beam systems. These beams rely on buffer gas traps (BGTs), in which positrons are trapped and cooled to the ambient temperature (300 K) through interactions with a molecular gas, and subsequently ejected as a high resolution pulsed beam. Experimental measurements, analytic models, and simulation results are used to understand the creation and characterization of these beams, with a focus on the mechanisms responsible for setting beam energy resolution. The information gained from these experimental and theoretical studies was then used to design, construct, and operate a next-generation high-energy-resolution beam system. In this new system, the pulsed beam from the BGT is magnetically guided into a new apparatus which re-traps the positrons, cools them to 50 K, and re-emits them as a pulsed beam with superior beam characteristics. Using these techniques, positron beams with total energy spreads as low as 6.9 meV FWHM are produced. This represents a factor of ~5 improvement over the previous state-of-the-art, making it the largest increase in positron beam energy resolution since the development of advanced moderator techniques in the early 1980s. These beams also have temporal spreads of 0.9 μs FWHM and radial spreads of 1 mm FWHM. This represents improvements by factors of ~2 and 10

  2. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    NASA Astrophysics Data System (ADS)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of Kinect™ was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and compared its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and on six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator’s skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested with a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator’s skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable.
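At the heart of each ICP iteration is a least-squares rigid alignment of the currently paired points, commonly solved in closed form via SVD (the Kabsch/Umeyama solution). A minimal numpy sketch of that single step (function names and toy data are illustrative; the paper's surface registration adds correspondence search and iteration on top):

```python
import numpy as np

def rigid_align(src, dst):
    """One closed-form rigid alignment (the core of each ICP iteration):
    find R, t minimising ||R @ src_i + t - dst_i|| for paired points."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflection
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# toy check: recover a known rotation about z and a translation
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([0.1, -0.2, 0.3])
R, t = rigid_align(src, dst)
```

A full ICP loop alternates this alignment with nearest-neighbour re-pairing of the CT and depth-camera surfaces until the residual converges, which is why a close initial position (here supplied by the 2D shape correspondence search) matters.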

  3. Positron microprobe at LLNL

    SciTech Connect

    Asoka, P; Howell, R; Stoeffl, W

    1998-11-01

    The electron linac based positron source at Lawrence Livermore National Laboratory (LLNL) provides the world's highest-current beam of keV positrons. We are building a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with sub-micron resolution. The widely spaced and intense positron packets from the tungsten moderator at the end of the 100 MeV LLNL linac are captured and trapped in a magnetic bottle. The positrons are then released in 1 ns bunches at a 20 MHz repetition rate. With a three-stage re-moderation we will compress the cm-sized original beam to a 1 μm diameter final spot on the target. The buncher will compress the arrival time of positrons on the target to less than 100 ps. A detector array with up to 60 BaF2 crystals in paired coincidence will measure the annihilation radiation with high efficiency and low background. The energy of the positrons can be varied from less than 1 keV up to 50 keV.

  4. Intense positron beam at KEK

    NASA Astrophysics Data System (ADS)

    Kurihara, Toshikazu; Yagishita, Akira; Enomoto, Atsushi; Kobayashi, Hitoshi; Shidara, Tetsuo; Shirakawa, Akihiro; Nakahara, Kazuo; Saitou, Haruo; Inoue, Kouji; Nagashima, Yasuyuki; Hyodo, Toshio; Nagai, Yasuyoshi; Hasegawa, Masayuki; Inoue, Yoshi; Kogure, Yoshiaki; Doyama, Masao

    2000-08-01

    A positron beam is a useful probe for investigating the electronic states in solids, especially concerning the surface states. The advantage of utilizing positron beams lies in their simpler interactions with matter, owing to the absence of any exchange forces, in contrast to the case of low-energy electrons. However, such studies as low-energy positron diffraction, positron microscopy and positronium (Ps) spectroscopy, which require high intensity slow-positron beams, are very limited due to the poor intensity obtained from a conventional radioactive-isotope-based positron source. In conventional laboratories, the slow-positron intensity is restricted to 10^6 e+/s due to the strength of the available radioactive source. An accelerator based slow-positron source is a good candidate for increasing the slow-positron intensity. One of the results using a high intensity pulsed positron beam is presented as a study of the origins of Ps emitted from SiO2. We also describe the two-dimensional angular correlation of annihilation radiation (2D-ACAR) measurement system with slow-positron beams and a positron microscope.

  5. Ultrashort megaelectronvolt positron beam generation based on laser-accelerated electrons

    NASA Astrophysics Data System (ADS)

    Xu, Tongjun; Shen, Baifei; Xu, Jiancai; Li, Shun; Yu, Yong; Li, Jinfeng; Lu, Xiaoming; Wang, Cheng; Wang, Xinliang; Liang, Xiaoyan; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2016-03-01

    Experimental generation of ultrashort MeV positron beams with high intensity and high density using a compact laser-driven setup is reported. A high-density gas jet is employed experimentally to generate MeV electrons with high charge; thus, a charge-neutralized MeV positron beam with high density is obtained when the laser-accelerated electrons irradiate high-Z solid targets. It is a novel electron-positron source for the study of laboratory astrophysics. Meanwhile, the MeV positron beam is pulsed with an ultrashort duration of tens of femtoseconds and has a high peak intensity of 7.8 × 10^21 s^-1, which allows specific studies of fast kinetics in millimeter-thick materials with a high time resolution and exhibits potential for applications in positron annihilation spectroscopy.

  6. Skyline matching based camera orientation from images and mobile mapping point clouds

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Eggert, D.; Brenner, C.

    2014-05-01

    Mobile mapping is widely used for collecting large amounts of geo-referenced data. Sensor fusion plays an important role, allowing multiple sensors such as laser scanners and cameras to be evaluated jointly, which requires determining the relative orientation between sensors. Based on data from a RIEGL VMX-250 mobile mapping system equipped with two laser scanners, four optional cameras, and a highly precise GNSS/IMU system, we propose an approach to improve camera orientations. A manually determined orientation is used as an initial approximation for matching a large number of points in optical images and the corresponding projected scan images. The search space of the point correspondences is reduced to skylines found in both the optical and the scan image. The skyline determination is based on alpha shapes, and the actual matching is done via an adapted ICP algorithm. The approximate values of the relative orientation are used as starting values for an iterative resection process. Outliers are removed at several stages of the process. Our approach is fully automatic and improves the camera orientation significantly.

  7. Camera-based noncontact metrology for static/dynamic testing of flexible multibody systems

    NASA Astrophysics Data System (ADS)

    Pai, P. Frank; Ramanathan, Suresh; Hu, Jiazhu; Chernova, DarYa K.; Qian, Xin; Wu, Genyong

    2010-08-01

    Presented here is a camera-based noncontact measurement theory for static/dynamic testing of flexible multibody systems that undergo large rigid, elastic and/or plastic deformations. The procedure and equations for accurate estimation of system parameters (i.e. the location and focal length of each camera and the transformation matrix relating its image and object coordinate systems) using an L-frame with four retroreflective markers are described in detail. Moreover, a method for refinement of estimated system parameters and establishment of a lens distortion model for correcting optical distortions using a T-wand with three markers is described. Dynamically deformed geometries of a multibody system are assumed to be obtained by tracing the three-dimensional instantaneous coordinates of markers adhered to the system's outside surfaces, and cameras and triangulation techniques are used for capturing marker images and identifying markers' coordinates. Furthermore, an EAGLE-500 motion analysis system is used to demonstrate measurements of static/dynamic deformations of six different flexible multibody systems. All numerical simulations and experimental results show that the use of camera-based motion analysis systems is feasible and accurate enough for static/dynamic experiments on flexible multibody systems, especially those that cannot be measured using conventional contact sensors.
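The triangulation step mentioned above, recovering a marker's 3D coordinates from its pixel coordinates in two calibrated cameras, is commonly done with the linear DLT method. A minimal numpy sketch (the projection matrices and values form a toy rig, not the EAGLE-500 setup):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker from two camera views.
    P1, P2: 3x4 projection matrices; x1, x2: normalized pixel coords (u, v)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# toy rig: identity camera and one translated 1 unit along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x2 = P2 @ np.append(X_true, 1.0)
X = triangulate(P1, P2, x1[:2] / x1[2], x2[:2] / x2[2])
```

In a real motion-capture system each retroreflective marker seen by two or more calibrated cameras is triangulated this way frame by frame, yielding the deformed geometry as a time series of 3D coordinates.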

  8. Towards direct reconstruction from a gamma camera based on compton scattering

    SciTech Connect

    Cree, M.J.; Bones, P.J. (Dept. of Electrical and Electronic Engineering)

    1994-06-01

    The Compton scattering camera (sometimes called the electronically collimated camera) has been shown by others to have the potential to better the photon-counting statistics and the energy resolution of the Anger camera for imaging in SPECT. By using coincident detection of Compton scattering events on two detecting planes, a photon can be localized to having been sourced on the surface of a cone. New algorithms are needed to achieve fully three-dimensional reconstruction of the source distribution from such a camera. It is shown that if a complete set of cone-surface projections is collected over an infinitely extending plane, the reconstruction problem is not only analytically solvable but also overspecified in the absence of measurement uncertainties. Two approaches to direct reconstruction are proposed, both based on the photons which travel perpendicularly between the detector planes. Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization). The modifications likely to be required in the presence of realistic measurement uncertainties are discussed.
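The cone for each coincident event follows from the Compton relation cos θ = 1 − m_e·c²·(1/E′ − 1/E): the apex is the first interaction point, the axis is the back-projected line from the absorption position through the scatter position, and θ is the scattering angle. A minimal numpy sketch (function name and toy values are illustrative):

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV

def compton_cone(p_scatter, p_absorb, e_deposit, e_total):
    """Cone of possible source directions from one coincident event.

    p_scatter, p_absorb: 3D interaction positions on the two detector planes
    e_deposit: energy deposited at the scatter plane (keV)
    e_total: initial photon energy (keV)
    Returns the cone apex, unit axis, and half-angle from the Compton
    relation cos(theta) = 1 - me*c^2 * (1/E' - 1/E).
    """
    axis = np.asarray(p_scatter, dtype=float) - np.asarray(p_absorb, dtype=float)
    axis = axis / np.linalg.norm(axis)
    e_scattered = e_total - e_deposit
    cos_theta = 1.0 - ME_C2 * (1.0 / e_scattered - 1.0 / e_total)
    return np.asarray(p_scatter, dtype=float), axis, np.arccos(np.clip(cos_theta, -1.0, 1.0))

# toy event: 662 keV photon deposits 200 keV in the scatter plane
apex, axis, half_angle = compton_cone([0, 0, 0], [0, 0, -10.0], 200.0, 662.0)
```

Every detected event constrains the source to one such cone surface; the reconstruction algorithms discussed in the paper recover the 3D distribution from the ensemble of cones.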

  9. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol is implemented in NAOC. It is composed of an interface board with an Altera FPGA, 32 MB SDRAM and some other accessory devices integrated on it, and two control software packages running on the Nios II embedded processor and the remote host PC, respectively. The system is used to replace a 7200 series image acquisition card inserted in a control and data acquisition PC, to download commands to an existing CCD camera, and to collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program running in the Nios II embedded system is built in the Nios II IDE, and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication and UDP data uploading.

  10. Demonstration of three-dimensional imaging based on handheld Compton camera

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Taya, T.; Kabuki, S.

    2015-11-01

    Compton cameras are promising detectors capable of performing measurements across a wide energy range for medical imaging applications, such as nuclear medicine and ion beam therapy. In previous work, we developed a handheld Compton camera to identify environmental radiation hotspots. This camera consists of a 3D position-sensitive scintillator array and multi-pixel photon counter arrays. In this work, we reconstructed the 3D image of a source via list-mode maximum likelihood expectation maximization (MLEM) and demonstrated the imaging performance of the handheld Compton camera. Based on both simulation and experiments, we confirmed that multi-angle data acquisition of the imaging region significantly improved the spatial resolution of the reconstructed image in the direction perpendicular to the detector. The experimental spatial resolutions in the X, Y, and Z directions at the center of the imaging region were 6.81 mm ± 0.13 mm, 6.52 mm ± 0.07 mm and 6.71 mm ± 0.11 mm (FWHM), respectively. The results of multi-angle data acquisition show the potential of reconstructing 3D source images.
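MLEM iterates a forward projection of the current image estimate through the system matrix, a ratio against the measured data, and a multiplicative back-projected update. A toy numpy sketch with a binned two-voxel system (for list-mode data each system-matrix row corresponds to a single detected event; all names and values are illustrative):

```python
import numpy as np

def mlem(A, counts, n_iter=200):
    """Maximum likelihood expectation maximization for emission imaging.

    A: (n_bins, n_voxels) system matrix, A[i, j] = probability that an
       emission in voxel j is recorded in measurement i
    counts: measured counts per bin (all ones for list-mode events)
    """
    x = np.ones(A.shape[1])                  # flat initial image
    sens = A.sum(axis=0)                     # sensitivity (normalisation) term
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        x *= (A.T @ (counts / proj)) / sens  # multiplicative update
    return x

# two voxels, two well-separated measurements: MLEM recovers the activity split
A = np.array([[0.9, 0.1], [0.1, 0.9]])
counts = np.array([90.0, 10.0])
img = mlem(A, counts)
```

The multiplicative form keeps the image non-negative and preserves the total measured counts, which is why MLEM is the standard reconstruction for the Poisson statistics of Compton-camera data.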

  11. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated to develop an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. The system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object space feature parameters was developed, and the effectiveness of this calibration approach was explored.
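    The triangulation principle the Kinect relies on reduces, for a calibrated sensor, to the classic relation Z = f·b/d between depth, focal length, baseline and disparity. A minimal sketch; the numbers below are illustrative, not Kinect specifications:

    ```python
    # Depth from disparity for a triangulation sensor: focal length f in pixels,
    # projector-camera baseline b in metres, disparity d in pixels.
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        return focal_px * baseline_m / disparity_px

    # e.g. 580 px focal length, 75 mm baseline, 21.75 px measured disparity
    print(depth_from_disparity(580, 0.075, 21.75))   # 2.0 (metres)
    ```

    The inverse dependence on disparity is why random depth errors grow roughly quadratically with range, which is central to the geometric-quality analysis the paper performs.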

  12. Development and calibration of the Moon-based EUV camera for Chang'e-3

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Song, Ke-Fei; Li, Zhao-Hui; Wu, Qing-Wen; Ni, Qi-Liang; Wang, Xiao-Dong; Xie, Jin-Jiang; Liu, Shi-Jie; He, Ling-Ping; He, Fei; Wang, Xiao-Guang; Chen, Bin; Zhang, Hong-Ji; Wang, Xiao-Dong; Wang, Hai-Feng; Zheng, Xin; E, Shu-Lin; Wang, Yong-Cheng; Yu, Tao; Sun, Liang; Wang, Jin-Ling; Wang, Zhi; Yang, Liang; Hu, Qing-Long; Qiao, Ke; Wang, Zhong-Su; Yang, Xian-Wei; Bao, Hai-Ming; Liu, Wen-Guang; Li, Zhe; Chen, Ya; Gao, Yang; Sun, Hui; Chen, Wen-Chang

    2014-12-01

    The process of development and calibration for the first Moon-based extreme ultraviolet (EUV) camera to observe Earth's plasmasphere is introduced and the design, test and calibration results are presented. The EUV camera is composed of a multilayer film mirror, a thin film filter, a photon-counting imaging detector, a mechanism that can adjust the direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 count s-1 Rayleigh-1. The geometric calibration, the absolute photometric calibration and the relative photometric calibration are carried out under different temperatures before launch to obtain a matrix that can correct geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.

  13. A low-cost web-camera-based multichannel fiber-optic spectrometer structure

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun

    2010-11-01

    This paper shows how a web camera can be used to realize a low-cost multichannel fiber-optic spectrometer suitable for educational purposes as well as for quality control purposes in small and medium enterprises. Our key idea is to arrange N input optical fibers in a line and use an external dispersive element to separate the incoming optical beams into their associated spectral components in a two-dimensional (2-D) space. As a web camera comes with a plastic lens, each set of spectral components is imaged onto the 2-D image sensor of the web camera. For our demonstration, we build a 5-channel web-camera-based fiber-optic spectrometer and calibrate it simply by using eight light sources with known peak wavelengths. In this way, it functions as a 5-channel wavelength meter in a 380-700 nm wavelength range with a calculated wavelength resolution of 0.67 nm/pixel. Experimental results show that the peak operating wavelengths of a light emitting diode (λp = 525 nm) and a laser pointer (λp = 655 nm) can be measured with a ±2.5 nm wavelength accuracy. The total cost of our 5-channel fiber-optic spectrometer is ~USD 92.50.
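    The calibration step in miniature: fit a first-order polynomial mapping sensor pixel column to wavelength using sources with known peaks, then read unknown peaks off the fit. The pixel/wavelength pairs below are invented, chosen near the paper's ~0.67 nm/pixel dispersion:

    ```python
    import numpy as np

    # Hypothetical calibration data: pixel column of each known spectral peak
    # on the web camera sensor versus its reference wavelength in nm.
    peak_px = np.array([ 42, 117, 208, 253, 342, 408, 458, 492])
    peak_nm = np.array([405, 450, 505, 532, 585, 625, 655, 675])

    slope, intercept = np.polyfit(peak_px, peak_nm, 1)   # linear dispersion fit

    def pixel_to_nm(px):
        """Convert a peak's pixel column to wavelength via the fitted line."""
        return slope * px + intercept

    print(round(slope, 2))   # fitted dispersion in nm/pixel
    ```

    With the fit in hand, any detected peak column converts directly to a wavelength reading, which is how the device acts as a wavelength meter.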

  14. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria the frequent (1 per hour) explosions and continuous lava flow effusion may be observed from a vantage point at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid dome uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a seismic long-period response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time-lapse imagery at one-minute intervals. These longer time scales reveal dynamic structure in the uplift and subsidence trends, effusion rate, and surface flow patterns that are related to internal conduit pressurization. In 2012 we also performed particle image velocimetry with multiple spatially separated cameras in order to reconstruct 3-dimensional surface movements.
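    The frequency-domain cross-correlation at the heart of the velocimetry step can be sketched directly; the toy random patches below stand in for the ~10x10 m^2 dome surface patches tracked in the imagery:

    ```python
    import numpy as np

    # PIV core: the peak of the inverse FFT of the cross-power spectrum gives
    # the integer-pixel shift between two patches.
    def piv_shift(a, b):
        A, B = np.fft.fft2(a), np.fft.fft2(b)
        r = np.fft.ifft2(A * np.conj(B))          # circular cross-correlation
        dy, dx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
        # map FFT indices to signed displacements
        if dy > a.shape[0] // 2: dy -= a.shape[0]
        if dx > a.shape[1] // 2: dx -= a.shape[1]
        return int(dy), int(dx)

    rng = np.random.default_rng(1)
    patch = rng.random((64, 64))
    moved = np.roll(patch, (3, -5), axis=(0, 1))  # simulate dome surface motion
    print(piv_shift(moved, patch))                # (3, -5)
    ```

    Sub-pixel precision, needed for the stated 10 cm resolution, is typically obtained by interpolating around the correlation peak rather than taking the integer argmax.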

  15. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results both in highway and in urban traffic environments. PMID:24558344
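    Once the virtual horizon row is known, range follows from flat-road pinhole geometry. A minimal sketch; the camera height, focal length and image rows below are illustrative, and the paper's contribution is estimating the horizon row robustly at run time:

    ```python
    # Similar triangles on a flat road: Z = f * H / (y_bottom - y_horizon),
    # with rows in pixels, focal length f in pixels, camera height H in metres.
    def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
        dy = y_bottom_px - y_horizon_px
        if dy <= 0:
            raise ValueError("vehicle contact point must lie below the horizon")
        return focal_px * cam_height_m / dy

    # camera 1.2 m above the road, focal length 800 px, horizon at row 240,
    # lead vehicle's ground contact point detected at row 288
    print(range_from_horizon(288, 240, 800, 1.2))   # 20.0 (metres)
    ```

    A few pixels of horizon error translate into large range errors at long distances, which is why pitch variation on hilly roads must be compensated.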

  16. Star-field identification algorithm. [for implementation on CCD-based imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A description of a new star-field identification algorithm that is suitable for implementation on CCD-based imaging cameras is presented. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. The algorithm incorporates tolerance to faulty input data, errors in the reference catalog, and instrument-induced systematic errors.
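    The oriented-triplet idea can be sketched by characterising three stars by their sorted pairwise angular separations, which are invariant under any rigid rotation of the field (simulated below by a common right-ascension offset), and matching that signature against a catalog within a tolerance. Catalog values and the tolerance are invented for illustration:

    ```python
    import math

    def ang_sep(a, b):
        """Angular separation of two (ra, dec) directions, radians;
        spherical law of cosines."""
        return math.acos(math.sin(a[1]) * math.sin(b[1]) +
                         math.cos(a[1]) * math.cos(b[1]) * math.cos(a[0] - b[0]))

    def triplet_key(s1, s2, s3):
        """Rotation-invariant signature: sorted pairwise separations."""
        return tuple(sorted((ang_sep(s1, s2), ang_sep(s2, s3), ang_sep(s1, s3))))

    def identify(observed, catalog, tol=1e-6):
        key = triplet_key(*observed)
        for name, trip in catalog.items():
            if all(abs(a - b) < tol for a, b in zip(key, triplet_key(*trip))):
                return name
        return None

    catalog = {"TRIP_A": ((0.10, 0.20), (0.15, 0.25), (0.12, 0.18))}
    seen = tuple((ra + 0.5, dec) for ra, dec in catalog["TRIP_A"])  # rotated view
    print(identify(seen, catalog))   # TRIP_A
    ```

    The real algorithm additionally uses visual magnitudes and orientation to disambiguate triplets and to tolerate catalog and instrument errors, as the abstract notes.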

  17. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. Acquisition and tracking are critical because of the narrow transmit beam. In some systems a single array detector performs both the spatial acquisition and the tracking function to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only part of the pixels are actually used. Beam angles varying across the field of view can be detected after passing through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via the IEEE 1394 bus, and pointing errors are computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
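    The centroid equation mentioned above reduces to an intensity-weighted mean over the area-of-interest window. A toy frame, not the authors' code:

    ```python
    import numpy as np

    # Intensity-weighted centroid of the focused spot: the sub-pixel spot
    # position is the pointing-error measurement fed back to the PAT loop.
    def centroid(frame):
        total = frame.sum()
        ys, xs = np.indices(frame.shape)
        return float((ys * frame).sum() / total), float((xs * frame).sum() / total)

    frame = np.zeros((10, 10))
    frame[4:6, 6:8] = 255          # a 2x2 spot centred at (4.5, 6.5)
    print(centroid(frame))         # (4.5, 6.5)
    ```

    Shrinking the area of interest around the last known spot position is what lets the frame rate climb from 500 fps in acquisition to 3k fps in tracking.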

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    NASA Astrophysics Data System (ADS)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and cheap hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.
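    The control idea in miniature: threshold the camera frame for a target, locate its centroid, and steer toward it. The paper uses OpenCV primitives for this (e.g. `cv2.inRange`, `cv2.moments`); plain NumPy stands in below so the sketch runs without a camera, and the thresholds and commands are invented:

    ```python
    import numpy as np

    def steering_command(frame, thresh=200, dead_band=5):
        """Map one grayscale frame to a robot command (toy decision logic)."""
        mask = frame > thresh                      # segment the bright target
        if not mask.any():
            return "SEARCH"                        # no target visible
        xs = np.nonzero(mask)[1]
        err = xs.mean() - frame.shape[1] / 2       # horizontal offset, pixels
        if err < -dead_band:
            return "LEFT"
        if err > dead_band:
            return "RIGHT"
        return "FORWARD"

    frame = np.zeros((120, 160), dtype=np.uint8)
    frame[50:70, 20:40] = 255                      # bright target left of centre
    print(steering_command(frame))                 # LEFT
    ```

    In the real system the command string would be translated into the robot controller's own motion protocol over its serial or network interface.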

  19. Using ground-based stereo cameras to derive cloud-level wind fields.

    PubMed

    Porter, John N; Cao, Guang Xia

    2009-08-15

    Upper-level wind fields are obtained by tracking the motion of cloud features as seen in calibrated ground-based stereo cameras. By tracking many cloud features, it is possible to obtain horizontal wind speed and direction over a cone area throughout the troposphere. Preliminary measurements were made at the Mauna Loa Observatory, and resulting wind measurements are compared with winds from the Hilo, Hawaii radiosondes. PMID:19684790

  20. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  1. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  2. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    PubMed

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm² area and 2 mm thickness. The detectors are pixelated and stacked, forming a 3D detector with voxel sizes of 2×1×2 mm³. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10^4 Bq source activities with equal efficiency and is completely saturated at 10^9 Bq. The efficiency of the system is evaluated using a simulated 18F point source phantom in the center of the field of view (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8. PMID:24932209

  3. Line-based camera calibration with lens distortion correction from a single image

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Gao, He; Wang, Yexin

    2013-12-01

    Camera calibration is a fundamental and important step in many machine vision applications. In some practical situations, computing camera parameters from merely a single image is becoming increasingly feasible and significant. However, existing single-view calibration methods have various disadvantages, such as ignoring lens distortion or requiring prior knowledge or a special calibration environment. To address these issues, we propose a line-based camera calibration method with lens distortion correction from a single image using three squares of unknown side length. Initially, the radial distortion coefficients are obtained through a non-linear optimization process which is isolated from the pin-hole model calibration, and the detected distorted lines of all the squares are corrected simultaneously. Subsequently, the corresponding lines used for homography estimation are normalized to avoid a specific unstable case, and the intrinsic parameters are calculated from the sufficient restrictions provided by the vectors of the homography matrix. To evaluate the performance of the proposed method, both simulated and real experiments were carried out, and the results show that the proposed method is robust under general conditions and achieves measurement accuracy comparable to the traditional multiple-view calibration method using a 2D chessboard target.

  4. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, namely with monitoring devices relying on the detection of secondary radiation. Herein a method based on Monte Carlo simulations is presented to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays for use in a clinical scenario. In addition, an analytical tool based on the Monte Carlo data is developed to predict the expected precision of a given geometrical configuration. Such a method follows clinical workflow requirements for a solution that is both relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  5. 18F-Labeled Silicon-Based Fluoride Acceptors: Potential Opportunities for Novel Positron Emitting Radiopharmaceuticals

    PubMed Central

    Bernard-Gauthier, Vadim; Wängler, Carmen; Wängler, Bjoern; Schirrmacher, Ralf

    2014-01-01

    Background. Over recent years, radiopharmaceutical chemistry has experienced a wide variety of innovative pushes towards finding both novel and unconventional radiochemical methods to introduce fluorine-18 into radiotracers for positron emission tomography (PET). These “nonclassical” labeling methodologies based on silicon-, boron-, and aluminium-18F chemistry deviate from the commonplace bonding of an [18F]fluorine atom (18F) to either an aliphatic or aromatic carbon atom. One method in particular, the silicon-fluoride-acceptor isotopic exchange (SiFA-IE) approach, invalidates a dogma in radiochemistry that has been widely accepted for many years: the inability to obtain radiopharmaceuticals of high specific activity (SA) via simple IE. Methodology. The most advantageous feature of IE labeling in general is that the labeling precursor and the labeled radiotracer are chemically identical, eliminating the need to separate the radiotracer from its precursor. SiFA-IE chemistry proceeds in dipolar aprotic solvents at room temperature and below, entirely avoiding the formation of radioactive side products during the IE. Scope of Review. A plethora of different SiFA species have been reported in the literature, ranging from small prosthetic groups and other compounds of low molecular weight to labeled peptides and, most recently, affibody molecules. Conclusions. The literature of recent years (from 2006 to 2014) shows unambiguously that SiFA-IE and other silicon-based fluoride acceptor strategies relying on 18F− leaving group substitutions have the potential to become a valuable addition to radiochemistry. PMID:25157357

  6. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    PubMed Central

    Saha, Krishnendu; Straus, Kenneth J.; Chen, Yu.; Glick, Stephen J.

    2014-01-01

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction. PMID:25371555
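    The geometric-symmetry trick in the abstract, computing only a subset of matrix elements and generating the rest by rotation, is exactly what makes the polar-voxel system matrix block-circulant: rotating the source by one angular voxel step just rotates the detector response by one block. A toy illustration with invented sizes and response values:

    ```python
    import numpy as np

    n = 8                                   # angular voxels == detector blocks
    base = np.array([5., 3., 1., 0., 0., 0., 1., 3.])   # response to voxel 0

    # Every other column of the system matrix is a circular shift of the first,
    # so storage drops from n*n elements to a single stored column.
    A = np.column_stack([np.roll(base, j) for j in range(n)])

    assert np.allclose(A[:, 3], np.roll(A[:, 0], 3))
    print(A.shape)   # (8, 8)
    ```

    Beyond the storage saving, circulant structure lets the forward and back projections inside MLEM be applied with FFTs rather than dense matrix products.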

  7. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.

  8. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    PubMed

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction. PMID:25371555

  9. An enhanced high-resolution EMCCD-based gamma camera using SiPM side detection.

    PubMed

    Heemskerk, J W T; Korevaar, M A N; Huizenga, J; Kreuger, R; Schaart, D R; Goorden, M C; Beekman, F J

    2010-11-21

    Electron-multiplying charge-coupled devices (EMCCDs) coupled to scintillation crystals can be used for high-resolution imaging of gamma rays in scintillation counting mode. However, the detection of false events as a result of EMCCD noise deteriorates the spatial and energy resolution of these gamma cameras and creates a detrimental background in the reconstructed image. In order to improve the performance of an EMCCD-based gamma camera with a monolithic scintillation crystal, arrays of silicon photomultipliers (SiPMs) can be mounted on the sides of the crystal to detect escaping scintillation photons, which are otherwise neglected. This provides a priori knowledge about the correct number and energies of gamma interactions that are to be detected in each CCD frame. This information can be used as an additional detection criterion, e.g. for the rejection of otherwise falsely detected events. The method was tested using a gamma camera based on a back-illuminated EMCCD, coupled to a 3 mm thick continuous CsI:Tl crystal. Twelve SiPMs were mounted on the sides of the CsI:Tl crystal. When the information from the SiPMs is used to select scintillation events in the EMCCD image, the background level for (99m)Tc is reduced by a factor of 2. Furthermore, the SiPMs enable detection of (125)I scintillations. A hybrid SiPM/EMCCD-based gamma camera thus offers great potential for applications such as in vivo imaging of gamma emitters. PMID:21030743

  10. Modeling and simulation of Positron Emission Mammography (PEM) based on double-sided CdTe strip detectors

    NASA Astrophysics Data System (ADS)

    Ozsahin, I.; Unlu, M. Z.

    2014-03-01

    Breast cancer is one of the leading causes of cancer death among women. Positron Emission Tomography (PET) mammography, also known as Positron Emission Mammography (PEM), is a method for imaging primary breast cancer. Over the past few years, PEM scanners based on scintillation crystals have dramatically increased in importance for the diagnosis and treatment of early-stage breast cancer. However, these detectors have significant limitations, such as poor energy resolution, which can lead to false-negative results (missed cancers) as well as false-positive results that suggest cancer and prompt an unnecessary biopsy. In this work, a PEM scanner based on CdTe strip detectors is simulated via the Monte Carlo method and evaluated in terms of its spatial resolution, sensitivity, and image quality. The spatial resolution is found to be ~1 mm in all three directions. The results also show that a PEM scanner based on CdTe strip detectors can produce high-resolution images for early diagnosis of breast cancer.

  11. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes.

    PubMed

    Després, Philippe; Funk, Tobias; Shah, Kanai S; Hasegawa, Bruce H

    2007-06-01

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous-crystal gamma camera. The simulated camera was composed of either a 4 x 4 array of four-channel 8 x 8 mm2 PSAPDs or an 8 x 8 array of 4 x 4 mm2 discrete APDs. These configurations, each requiring a 64-channel readout, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 x 3.6 cm2 photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (approximately 30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm FWHM were obtained for the APD- and PSAPD-based cameras, respectively, for 99mTc, with corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded maximal energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small field-of-view gamma cameras dedicated, for example, to small-animal or organ imaging. PMID:17505089

  12. Stereoscopic ground-based determination of the cloud base height: theory of camera position calibration with account for lens distortion

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Postylyakov, Oleg V.

    2016-05-01

    For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and processing the obtained sequence of stereo frames to obtain the height of the cloud base. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment, and this shift is used to estimate the cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. An earlier paper described a mathematical model of the stereophotography measurement and posed and solved the problem of adjusting the optical axes of the cameras in the paraxial (first-order geometric optics) approximation, applied to the central part of the sky frames. This paper describes a model of the experiment which takes lens distortion into account in the Seidel approximation (terms up to third order in the distance from the optical axis). We developed a procedure for simultaneous camera position calibration and estimation of the lens distortion parameters in the Seidel approximation.
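    A sketch of the third-order (Seidel) radial distortion model and its inversion by fixed-point iteration; the distortion coefficient and test point below are invented, and the paper estimates such coefficients jointly with the camera poses rather than assuming them known:

    ```python
    # Seidel model: distorted radius r_d = r * (1 + k1 * r**2) in normalised
    # image coordinates. Undistortion has no closed form, so iterate.
    def undistort(xd, yd, k1, n_iter=10):
        x, y = xd, yd                        # initial guess: distorted coords
        for _ in range(n_iter):
            r2 = x * x + y * y
            x = xd / (1 + k1 * r2)           # fixed-point refinement
            y = yd / (1 + k1 * r2)
        return x, y

    # round-trip check: distort a known point, then recover it
    x0, y0, k1 = 0.3, -0.2, 0.1
    r2 = x0 * x0 + y0 * y0
    xd, yd = x0 * (1 + k1 * r2), y0 * (1 + k1 * r2)
    xr, yr = undistort(xd, yd, k1)
    print(abs(xr - x0) < 1e-6 and abs(yr - y0) < 1e-6)   # True
    ```

    For the small k1 typical of photographic lenses the iteration contracts quickly; a handful of passes recovers the undistorted point to well below pixel precision.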

  13. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%. PMID:26193503

  14. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones, which brings the opportunity for optical character recognition in camera-captured images. Many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much of the important information in an image, independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
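
    A minimal, illustrative sketch of the phase-only idea (a much simpler relative of the authors' phase-congruency system, with names of my choosing): normalizing the cross-power spectrum to unit magnitude discards the Fourier magnitude entirely, so the recovered shift between two 1-D signals is insensitive to global illumination (scaling) changes.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Naive inverse discrete Fourier transform."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def phase_correlation_shift(f, g):
    """Estimate the circular shift from f to g using Fourier phase only:
    the cross-power spectrum is normalized to unit magnitude, so only
    phase differences survive."""
    F, G = dft(f), dft(g)
    R = []
    for Fk, Gk in zip(F, G):
        c = Gk * Fk.conjugate()
        m = abs(c)
        R.append(c / m if m > 1e-12 else 0j)
    r = idft(R)
    return max(range(len(r)), key=lambda i: r[i].real)
```

Scaling one signal by a constant (a crude model of an illumination change) leaves the estimated shift unchanged, which is the property the phase-based recognizer exploits.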

  15. Binarization method based on evolution equation for document images produced by cameras

    NASA Astrophysics Data System (ADS)

    Wang, Yan; He, Chuanjiang

    2012-04-01

    We present an evolution equation-based binarization method for document images produced by cameras. Unlike existing thresholding techniques, the idea behind our method is that a family of gradually binarized images is obtained as the solution of an evolution partial differential equation, starting from the original image. In our formulation, the evolution is controlled by a global force and a local force, both of which have opposite signs inside and outside the objects of interest in the original image. A simple finite difference scheme with a significantly larger time step is used to solve the evolution equation numerically; the desired binarization is typically obtained after only one or two iterations. Experimental results on 122 camera-captured document images show that our method yields good visual quality and OCR performance.
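
    The evolution described above can be sketched in a few lines, assuming (as a simplification of the paper's formulation, not its actual forces) that the global force is the deviation of a pixel from the global mean and the local force its deviation from a neighbourhood mean. Both are negative for dark text pixels and positive for bright background, so a large time step drives the solution to the two binary levels after one or two iterations.

```python
def binarize(image, dt=10.0, iters=2, w=1):
    """Toy evolution-equation binarization of a 2-D list of grey values
    in [0, 1]. Each pixel u evolves under a global force (deviation from
    the global mean) plus a local force (deviation from a (2w+1)^2
    neighbourhood mean); values are clamped to [0, 1] each step."""
    rows, cols = len(image), len(image[0])
    gmean = sum(map(sum, image)) / (rows * cols)
    u = [row[:] for row in image]
    for _ in range(iters):
        for i in range(rows):
            for j in range(cols):
                win = [image[a][b]
                       for a in range(max(0, i - w), min(rows, i + w + 1))
                       for b in range(max(0, j - w), min(cols, j + w + 1))]
                lmean = sum(win) / len(win)
                # forces depend only on the original image, not on u
                force = (image[i][j] - gmean) + (image[i][j] - lmean)
                u[i][j] = min(1.0, max(0.0, u[i][j] + dt * force))
    return [[1 if v > 0.5 else 0 for v in row] for row in u]
```

On a bright page with a single dark mark, the mark saturates to 0 and the background to 1 after the first large step.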

  16. Secure chaotic map based block cryptosystem with application to camera sensor networks.

    PubMed

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to make the sub-keys plaintext-dependent. Unfortunately, we discovered that their scheme is unable to withstand a key-stream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. The transmission security between the sink node and the server is achieved by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks. PMID:22319371
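
    As a hedged illustration of the underlying idea (not the improved cipher itself, and emphatically not secure in practice), a logistic map can generate a keystream that is XORed with the data. Without ciphertext feedback the keystream depends only on the key, so encrypting two messages under the same key reuses it, which is exactly the kind of weakness a key-stream attack exploits.

```python
def logistic_keystream(x0, mu=3.99, nbytes=16, burn=100):
    """Toy keystream from the logistic map x -> mu*x*(1-x).
    Illustrative only; the map parameters and byte extraction rule are
    assumptions, and a fixed keystream like this is insecure."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = mu * x * (1 - x)
    out = bytearray()
    for _ in range(nbytes):
        x = mu * x * (1 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, keystream):
    """XOR data with a keystream; applying it twice decrypts."""
    return bytes(b ^ k for b, k in zip(data, keystream))
```

Decryption is the same XOR with the same keystream, so a round trip recovers the plaintext.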

  17. Electro optical design for a space camera based on MODTRAN data analysis

    NASA Astrophysics Data System (ADS)

    Haghshenas, Javad

    2014-11-01

    The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is discussed in this paper. An atmosphere analysis is performed based on the MODTRAN model, and the total radiance of visible light reaching the camera entrance aperture is simulated with the atmospheric radiative transfer software PcModWin. The simulation covers various sun zenith angles and earth surface albedos to predict the signal performance at different times and locations. Based on the simulated total incident radiance, an appropriate linear CCD is chosen and an optical design is carried out to fully satisfy the electro-optical requirements. The optical design is based on a Schmidt-Cassegrain scheme, which allows simple fabrication and high accuracy. The proposed electro-optical camera achieves a 5.9 m ground resolution with an image swath of more than 23 km on the earth's surface. The satellite is assumed to be at 681 km altitude with a 6.8 km/s ground track speed.
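
    The quoted figures can be cross-checked with the standard ground-sample-distance relation GSD = H·p/f for nadir viewing. The 10 µm pixel pitch, 1.154 m focal length and 4096-pixel line array assumed below are illustrative values chosen to be consistent with the stated 5.9 m resolution and 23 km swath; they are not parameters from the paper.

```python
def ground_sample_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance for nadir viewing:
    GSD = altitude * pixel pitch / focal length."""
    return altitude_m * pixel_pitch_m / focal_length_m

def swath_width(gsd_m, n_pixels):
    """Across-track swath covered by a linear array of n_pixels."""
    return gsd_m * n_pixels
```

With the assumed optics, 681 km altitude gives a GSD of about 5.9 m, and a 4096-element line then spans roughly 24 km, matching the ">23 km" swath claim.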

  18. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with an additive white Gaussian noise (AWGN) model. This kind of test data does not correspond to real-world image data taken with today's digital cameras. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps of the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation of the noise characteristics, and we show how this approximation can be used in standard wavelet denoising methods. We improve wavelet hard thresholding and bivariate thresholding based on our noise analysis results; both visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
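
    A minimal sketch of LUT-driven, signal-dependent hard thresholding on a single-level Haar transform. The actual paper analyzes a full camera color-processing chain and uses more elaborate wavelet schemes; the noise look-up function below (noise standard deviation growing with signal level, as for photon noise) is an assumption for illustration.

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (len(x) even)."""
    s = 0.5 ** 0.5
    approx = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    detail = [(a - b) * s for a, b in zip(x[::2], x[1::2])]
    return approx, detail

def inverse_haar_step(approx, detail):
    """Invert one Haar level."""
    s = 0.5 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) * s, (a - d) * s]
    return out

def denoise(x, noise_lut):
    """Hard-threshold Haar detail coefficients with a signal-dependent
    threshold: sigma is looked up from the local signal level, mirroring
    the LUT idea (the 3-sigma rule and the LUT itself are assumptions)."""
    approx, detail = haar_step(x)
    kept = []
    for a, d in zip(approx, detail):
        level = a * (0.5 ** 0.5)       # local mean of the sample pair
        sigma = noise_lut(level)
        kept.append(d if abs(d) > 3 * sigma else 0.0)
    return inverse_haar_step(approx, kept)
```

Because the threshold follows the local level, small fluctuations are smoothed in both dark and bright regions even though the bright region is noisier in absolute terms.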

  19. Infrared line cameras based on linear arrays for industrial temperature measurement

    NASA Astrophysics Data System (ADS)

    Drogmoeller, Peter; Hofmann, Guenter; Budzier, Helmut; Reichardt, Thomas; Zimmerhackl, Manfred

    2002-03-01

    The PYROLINE/MikroLine cameras provide continuous, non-contact measurement of linear temperature distributions. Operation in conjunction with the IR_LINE software provides data recording, real-time graphical analysis, process integration and camera-control capabilities. One system is based on pyroelectric line sensors with either 128 or 256 elements, operating at frame rates of 128 and 544 Hz, respectively. Temperatures between 0 and 1300 °C are measurable in four distinct spectral ranges: 8-14 µm for low temperatures, 3-5 µm for medium temperatures, 4.8-5.2 µm for glass-temperature applications and 1.4-1.8 µm for high temperatures. A newly developed IR line camera (HRP 250), based upon a thermoelectrically cooled, 160-element PbSe detector array operating in the 3-5 µm spectral range, permits the thermal gradients of fast-moving targets to be measured in the range 50-180 °C at a maximum frequency of 18 kHz. This special system was used to measure temperature distributions on rotating tires at velocities of more than 300 km/h (190 mph). A modified version of this device was used for real-time measurement of disk-brake rotors under load. Another line camera, consisting of a 256-element InGaAs array, was developed for the spectral range of 1.4-1.8 µm to detect impurities of polypropylene and polyethylene in raw cotton at frequencies of 2.5-5 kHz.

  20. AOTF-based NO2 camera, results from the AROMAT-2 campaign

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Vanhamel, Jurgen; Van Opstal, Bert; Maes, Jeroen; Merlaud, Alexis; Stebel, Kerstin; Schuettemeyer, Dirk

    2016-04-01

    A hyperspectral imager based on an acousto-optical tunable filter (AOTF) has been developed in the framework of the ALTIUS mission (atmospheric limb tracker for the investigation of the upcoming stratosphere). ALTIUS is a three-channel (UV, VIS, NIR) space-borne limb sounder aiming at the retrieval of concentration profiles of important trace species (O3, NO2, aerosols and more) with good vertical resolution. An optical breadboard was built from the VIS channel concept and now serves as a ground-based remote sensing instrument. Its good spectral resolution (0.6 nm), coupled with its natural imaging capabilities (6° square field of view sampled by a 512x512 pixel sensor), makes it suitable for the measurement of 2D fields of NO2, similar to what is nowadays achieved with SO2 cameras. Our NO2 camera was one of the instruments that took part in the second Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT-2) campaign in August 2015. It was pointed at the smokestacks of the coal- and oil-burning power plant of Turceni (Romania) in order to image the emitted NO2 field and derive slant columns and instantaneous emission fluxes. The ultimate goal of the AROMAT campaigns is to prepare the validation of TROPOMI onboard Sentinel-5P. We will briefly describe the instrumental concept of the NO2 camera, its heritage from the ALTIUS mission, and its advantages compared to previous attempts at reaching the same goal. Key results obtained with the camera during the AROMAT-2 campaign will be presented and further improvements will be discussed.

  1. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms at clock rates above 100 MHz. This, combined with FPGAs' low cost, ease of use, and selected dedicated hardware, makes them an ideal technology for the data acquisition system of positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next-generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock-rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a sufficiently short ADC sampling period. We also show that the position of events in the scanner can be determined in real time using a statistics-based positioning algorithm. PMID:21961085
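
    Sub-clock-rate pulse timing can be illustrated by the common interpolated threshold-crossing technique: the crossing instant is estimated between the two ADC samples that straddle the threshold, giving a timestamp finer than the sampling period. This is a generic sketch (names and pulse model assumed), not the authors' FPGA implementation.

```python
def threshold_crossing_time(samples, threshold, dt):
    """Sub-sample timing pickoff on a digitized rising pulse.

    samples:   list of ADC values, one per clock tick
    threshold: timing threshold in ADC units
    dt:        sampling period (e.g. in ns)
    Returns the interpolated crossing time, or None if never crossed.
    """
    for i in range(1, len(samples)):
        lo, hi = samples[i - 1], samples[i]
        if lo < threshold <= hi:
            # linear interpolation between the two straddling samples
            frac = (threshold - lo) / (hi - lo)
            return (i - 1 + frac) * dt
    return None
```

With a 10 ns sampling period, a pulse crossing halfway between two ticks is timestamped with 5 ns granularity or better, which is the sense in which FPGA timing can beat the raw ADC clock.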

  2. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    SciTech Connect

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-15

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low-background, high-efficiency radiocesium detectors such as germanium semiconductor detectors are expensive and bulky. To solve this problem, we developed a radiocesium detector employing positron emission tomography (PET) technology. Because 134Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can be selectively measured in coincidence. Major environmental gamma emitters such as 40K (1.46 MeV) are single-photon emitters, so a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight Bi4Ge3O12 (BGO) scintillation detectors in double rings (four per ring) and measured coincidences between these detectors using a PET data acquisition system. Each 50 × 50 × 30 mm BGO crystal was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring coincidences, we eliminated most single gamma photons from the energy distribution and detected only those from 134Cs, at an average efficiency of 12%. The minimum detectable concentration of the system for a 100 s acquisition time is less than half of the food monitor requirement in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising for detecting low-level radiocesium.
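
    The coincidence selection described above can be sketched as a simple time-window filter over a time-sorted event stream: two photons arriving within the window are kept as a pair, while isolated singles (e.g. 40K gammas) are rejected. The function name, event format and greedy pairing rule are illustrative assumptions, not the acquisition system's actual logic.

```python
def coincidences(events, window_ps=5000):
    """Pair events whose timestamps fall within the coincidence window.

    events: list of (time_ps, energy_keV) tuples, sorted by time.
    Returns a list of paired events; unpaired singles are discarded.
    """
    pairs = []
    i = 0
    while i + 1 < len(events):
        t0, e0 = events[i]
        t1, e1 = events[i + 1]
        if t1 - t0 <= window_ps:
            pairs.append(((t0, e0), (t1, e1)))
            i += 2                 # both events consumed by the pair
        else:
            i += 1                 # a single: drop it and move on
    return pairs
```

In a stream containing two 134Cs-like pairs and one lone 1.46 MeV photon, only the pairs survive, which is how the coincidence requirement suppresses the environmental background.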

  3. Performance evaluation of dual-crystal APD-based detector modules for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Pepin, Catherine M.; Bérard, Philippe; Cadorette, Jules; Tétrault, Marc-André; Leroux, Jean-Daniel; Michaud, Jean-Baptiste; Robert, Stéfan; Dautet, Henri; Davies, Murray; Fontaine, Réjean; Lecomte, Roger

    2006-03-01

    Positron Emission Tomography (PET) scanners dedicated to small-animal studies have seen swift development in recent years. Higher spatial resolution, greater sensitivity and faster scanning procedures are the leading factors driving further improvements. The new LabPET(TM) system is a second-generation APD-based animal PET scanner that combines avalanche photodiode (APD) technology with a highly integrated, fully digital, parallel electronic architecture. This work reports on the performance characteristics of the LabPET quad detector module, which consists of LYSO/LGSO phoswich assemblies individually coupled to reach-through APDs. Individual crystals, 2 × 2 × ~10 mm3 in size, are optically coupled in pairs along one long side to form the phoswich detectors. Although the LYSO and LGSO photopeaks partially overlap, the good energy resolution and decay-time difference allow for efficient crystal identification by pulse-shape discrimination. Conventional analog discrimination techniques result in significant misidentification, but advanced digital signal processing methods make it possible to circumvent this limitation, achieving virtually error-free decoding. Timing resolutions of 3.4 ns and 4.5 ns FWHM have been obtained for LYSO and LGSO, respectively, using analog CFD techniques. However, test bench measurements with digital techniques have shown that resolutions in the range of 2 to 4 ns FWHM can be achieved.

  4. Low background high efficiency radiocesium detection system based on positron emission tomography technology.

    PubMed

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low-background, high-efficiency radiocesium detectors such as germanium semiconductor detectors are expensive and bulky. To solve this problem, we developed a radiocesium detector employing positron emission tomography (PET) technology. Because (134)Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can be selectively measured in coincidence. Major environmental gamma emitters such as (40)K (1.46 MeV) are single-photon emitters, so a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight Bi4Ge3O12 (BGO) scintillation detectors in double rings (four per ring) and measured coincidences between these detectors using a PET data acquisition system. Each 50 × 50 × 30 mm BGO crystal was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring coincidences, we eliminated most single gamma photons from the energy distribution and detected only those from (134)Cs, at an average efficiency of 12%. The minimum detectable concentration of the system for a 100 s acquisition time is less than half of the food monitor requirement in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising for detecting low-level radiocesium. PMID:24089828

  5. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low-background, high-efficiency radiocesium detectors such as germanium semiconductor detectors are expensive and bulky. To solve this problem, we developed a radiocesium detector employing positron emission tomography (PET) technology. Because 134Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can be selectively measured in coincidence. Major environmental gamma emitters such as 40K (1.46 MeV) are single-photon emitters, so a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight Bi4Ge3O12 (BGO) scintillation detectors in double rings (four per ring) and measured coincidences between these detectors using a PET data acquisition system. Each 50 × 50 × 30 mm BGO crystal was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring coincidences, we eliminated most single gamma photons from the energy distribution and detected only those from 134Cs, at an average efficiency of 12%. The minimum detectable concentration of the system for a 100 s acquisition time is less than half of the food monitor requirement in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising for detecting low-level radiocesium.

  6. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional bulk-semiconductor-based IR cameras suffer from low frame rate, low resolution, temperature dependence and high cost, while carbon nanotube (CNT) based low-dimensional-material nanotechnology has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing: not only for a fundamental understanding of the processes induced by the CNT photoresponse, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and a parylene top packing blocked humid environmental factors. At the same time, the fabrication process was optimized by dielectrophoresis with real-time electrical monitoring and by multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized with digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nanosensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms in light field sampling, 3-D cameras and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, are extracted and

  7. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to overcome the mass loading introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera that can capture images at up to 1000 frames per second. To process the captured images on the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
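
    The NCC score at the heart of the template tracker is the normalized covariance between a candidate patch and the template, which makes it invariant to brightness offsets and contrast scaling. Below is a direct, unoptimized sketch on flattened patches (the paper's contribution is a faster local search around this same score; names are assumed).

```python
def ncc(patch, template):
    """Normalized cross-correlation between two equal-size patches,
    given as flattened lists of grey values. Returns a score in [-1, 1];
    1 means a perfect match up to brightness offset and contrast scale."""
    n = len(template)
    mp = sum(patch) / n
    mt = sum(template) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, template))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in template) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0
```

A patch that is the template plus a constant brightness offset still scores 1.0, which is why NCC tracking tolerates illumination changes between frames.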

  8. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision, commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates thigh volume (~6%) when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement. PMID:26928458
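
    The repeatability figures quoted above are technical errors of measurement. For two repeated trials on n subjects, TEM = sqrt(Σd²/2n), where d is the per-subject difference, and %TEM expresses it relative to the grand mean. A small sketch with made-up data (not the study's measurements):

```python
import math

def tem(trial1, trial2):
    """Technical error of measurement for two repeated trials.

    Returns (TEM in measurement units, %TEM relative to the grand mean).
    """
    n = len(trial1)
    sq = sum((a - b) ** 2 for a, b in zip(trial1, trial2))
    t = math.sqrt(sq / (2 * n))
    grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
    return t, 100 * t / grand_mean
```

Three thigh volumes (in litres, hypothetical) measured twice with ~0.1 L scatter give a TEM of 0.1 L, i.e. a %TEM just under 2%, comparable to the inter-calibration repeatability reported above.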

  9. Motion measurement of SAR antenna based on high frame rate camera

    NASA Astrophysics Data System (ADS)

    Li, Q.; Cao, R.; Feng, H.; Xu, Z.

    2015-03-01

    Synthetic Aperture Radar (SAR) is widely used in marine, agricultural, geological and other fields, and the SAR antenna is one of its most important subsystems. Antenna performance has a significant impact on SAR sensitivity, azimuth resolution, image blur and other parameters. To improve SAR resolution, SAR antennas are designed and fabricated in a flexible, expandable style. However, the movement of a flexible antenna has a considerable impact on the accuracy of SAR systems, so motion measurement of the flexible antenna is an urgent problem. This paper studies a motion measurement method based on a high frame rate camera; a flexible antenna motion measurement experiment was designed and completed. In the experiment, the main IMU and the sub IMU were placed at the two ends of a cantilever simulating the flexible antenna, the high frame rate camera was placed above the main IMU, and the imaging target was set on the side of the sub IMU. When the cantilever moved, the IMUs acquired the spatial coordinates of the cantilever motion in real time, the high frame rate camera captured a series of target images, and the images were then input into the JTC to obtain the cantilever motion coordinates. Comparison and analysis of the measurement results verify the measurement accuracy of the flexible antenna motion.

  10. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with existing methods such as the Good Features detector and the Speeded-Up Robust Features (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined, which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing the camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), naive Bayes and Support Vector Machine (SVM) classifiers. The results show that many types of physical activities can be recognized from field-acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  11. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.

  12. Range camera calibration based on image sequences and dense comprehensive error statistics

    NASA Astrophysics Data System (ADS)

    Karel, Wilfried; Pfeifer, Norbert

    2009-01-01

    This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set, which is captured without auxiliary devices serving as high-order reference but with the camera guided by hand. Flat, circular targets with known positions, stuck on a planar whiteboard, are automatically tracked throughout the amplitude layer of long image sequences. These image observations are introduced into a bundle block adjustment, which on the one hand results in the determination of the interior orientation. Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow for the derivation of reference values for the actual distance observations. Aided by the automatic reconstruction of the camera's trajectory and attitude, comprehensive statistics are generated and accumulated into a 5-dimensional matrix in order to remain manageable. The marginal distributions of this matrix are inspected for the purpose of system identification, whereupon its elements are introduced into another least-squares adjustment, finally leading to clear range correction models and parameters.

  13. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements of high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also believe that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of higher-level photogrammetric products. PMID:25835187

  14. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing demand for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The flight results show that the geo-position accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional MDCS (MADC II) and the proposed one demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of higher-level photogrammetric products. PMID:25835187

  15. Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities

    PubMed Central

    Cusack, Jeremy J.; Dickman, Amy J.; Rowcliffe, J. Marcus; Carbone, Chris; Macdonald, David W.; Coulson, Tim

    2015-01-01

    Camera trap surveys exclusively targeting features of the landscape that increase the probability of photographing one or several focal species are commonly used to draw inferences on the richness, composition and structure of entire mammal communities. However, these studies ignore expected biases in species detection arising from sampling only a limited set of potential habitat features. In this study, we test the influence of camera trap placement strategy on community-level inferences by carrying out two spatially and temporally concurrent surveys of medium to large terrestrial mammal species within Tanzania’s Ruaha National Park, employing either strictly game trail-based or strictly random camera placements. We compared the richness, composition and structure of the two observed communities, and evaluated what makes a species significantly more likely to be caught at trail placements. Observed communities differed marginally in their richness and composition, although differences were more noticeable during the wet season and for low levels of sampling effort. Lognormal models provided the best fit to rank abundance distributions describing the structure of all observed communities, regardless of survey type or season. Despite this, carnivore species were more likely to be detected at trail placements relative to random ones during the dry season, as were larger bodied species during the wet season. Our findings suggest that, given adequate sampling effort (> 1400 camera trap nights), placement strategy is unlikely to affect inferences made at the community level. However, surveys should consider more carefully their choice of placement strategy when targeting specific taxonomic or trophic groups. PMID:25950183

  16. Camera characterization using back-propagation artificial neural network based on Munsell system

    NASA Astrophysics Data System (ADS)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera's output RGB signals do not directly correspond to the tristimulus values based on the CIE standard colorimetric observer, i.e., camera RGB is a device-dependent color space. For achieving accurate color information, we need to perform color characterization, which can be used to derive a transformation between camera RGB values and CIE XYZ values. In this paper we set up a Back-Propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with 1267 patches in total, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The images were taken in a light booth with the surround kept dark. The viewing/illuminating geometry was 0/45 using a D65 illuminant; the lighting illuminating the reference target needs to be as uniform as possible. The BP network had a 5-layer (3-10-10-10-3) topology, selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples served as testing samples. Experimental results show that the mean color difference between the reproduced colors and target colors is 0.5 CIELAB color-difference unit, smaller than the maximum acceptable color difference of 2 CIELAB units. The results satisfy applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production, color reproduction across different media, etc.
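The RGB-to-XYZ mapping described above can be sketched with a small back-propagation network. The following is an illustrative toy in NumPy, not the paper's setup: it uses a single hidden layer instead of the reported 3-10-10-10-3 topology, and a synthetic linear sRGB-to-XYZ matrix stands in for the measured Munsell patch data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: a linear sRGB->XYZ matrix plays the role of the measured
# Munsell patches (the paper used 1000 training / 267 test samples).
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])
X = rng.random((1000, 3))            # camera RGB values in [0, 1]
Y = X @ M.T                          # target CIE XYZ values

# One hidden layer (3-10-3) for brevity, instead of the paper's 3-10-10-10-3.
W1 = rng.normal(0.0, 0.5, (3, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 3)); b2 = np.zeros(3)

def forward(x):
    h = np.tanh(x @ W1 + b1)         # hidden activations
    return h, h @ W2 + b2            # network output (predicted XYZ)

lr = 0.1
for _ in range(5000):                # plain batch gradient descent
    h, out = forward(X)
    err = out - Y                    # gradient of mean squared error w.r.t. out
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1.0 - h**2) # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
rmse = float(np.sqrt(np.mean((pred - Y) ** 2)))  # training error after fitting
```

In practice the fitted network would be evaluated in CIELAB color-difference units on the held-out patches, as the abstract reports.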

  17. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, V.; Djambazov, L.; Dorner, D.; Farnier, C.; Gendotti, A.; Grimm, O.; von Gunten, H. P.; Hildebrand, D.; Horisberger, U.; Huber, B.; Kim, K.-S.; Köhne, J.-H.; Krähenbühl, T.; Krumm, B.; Lee, M.; Lenain, J.-P.; Lorenz, E.; Lustermann, W.; Lyard, E.; Mannheim, K.; Meharga, M.; Neise, D.; Nessi-Tedaldi, F.; Overkemping, A.-K.; Pauss, F.; Renker, D.; Rhode, W.; Ribordy, M.; Rohlfs, R.; Röser, U.; Stucki, J.-P.; Thaele, J.; Tibolla, O.; Viertel, G.; Vogler, P.; Walter, R.; Warda, K.; Weitzel, Q.

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  18. Optimum design of the carbon fiber thin-walled baffle for the space-based camera

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Song, Gu; Yuan, An; Jin, Guang

    2011-08-01

    Designing the thin-walled baffle of a space-based camera is an important part of developing a lightweight space camera, owing to the stringent mass requirements and the harsh mechanical environment, especially when the baffle is made of carbon fiber. This paper describes the design process of such a carbon fiber thin-walled baffle, which is instructive for the design of other thin-walled baffles of space cameras. Through finite element analysis of the sensitivity of the wall parameters to structural stiffness and strength, the designer obtains the design margin that the baffle can tolerate within its development requirements, and can thereby establish a sound optimization criterion for the geometric parameter optimization. Because the stiffness and strength of a carbon fiber structure can themselves be designed, the effect of the optimization is all the more remarkable when the parameters are well chosen. Combining manufacturing constraints with the design requirements, the structural scheme of the thin-walled baffle was selected and the carbon fiber fabrication technology was optimized through FEM-based optimization, effectively reducing processing cost and cycle time. Meanwhile, the weight of the thin-walled baffle was reduced by about 20% while still meeting the structural design requirements. Engineering evaluation shows that the thin-walled baffle satisfies the practical needs of the space-based camera very well, and its final assessment indices were significantly better than the overall design requirements.

  19. A pixellated γ-camera based on CdTe detectors: clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. CdTe was chosen over CdZnTe because the manufacturer (Eurorad, France) has long experience in producing high-grade material with good homogeneity and stability, and because its transport properties, characterised by the mobility-lifetime product, are at least 5 times better than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors shared among 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a dedicated 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were carried out by the Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of close γ sources with the advantage of high spatial resolution. The equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to assess early the extent of an

  20. Microstructural probing of ferritic/martensitic steels using internal transmutation-based positron source

    NASA Astrophysics Data System (ADS)

    Krsjak, Vladimir; Dai, Yong

    2015-10-01

    This paper presents the use of an internal 44Ti/44Sc radioisotope source for direct microstructural characterization of ferritic/martensitic (f/m) steels after irradiation in targets of spallation neutron sources. Gamma spectroscopy measurements show a production of ∼1 MBq of 44Ti per 1 g of f/m steel irradiated to 1 dpa (displacements per atom) in the mixed proton-neutron spectrum at the Swiss spallation neutron source (SINQ). In the decay chain 44Ti → 44Sc → 44Ca, positrons are produced together with prompt gamma rays, which enables the application of different positron annihilation spectroscopy (PAS) analyses, including lifetime and Doppler broadening spectroscopy. Due to the high production yield, long half-life and relatively high positron energy of 44Ti, this methodology opens up new potential for simple, effective and inexpensive characterization of radiation-induced defects in f/m steels irradiated in a spallation target.

  1. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.

    PubMed

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The comparison of the estimated results to ground truth produced a variation of 3% from the mean. PMID:26393600
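The level-extraction step named above (Sobel filtering of the camera frame to locate the water line as a binary edge image) can be sketched as follows. The frame is synthetic and the geometry is simplified relative to the paper's ellipsoidal fitting; all values are invented for illustration.

```python
import numpy as np

# Synthetic grayscale frame: bright liquid below row 40, dark background
# above, standing in for a camera image of the glass (illustrative only).
img = np.zeros((80, 60))
img[40:, :] = 1.0

# 3x3 Sobel kernel responding to vertical gradients (horizontal edges).
ky = np.array([[-1, -2, -1],
               [ 0,  0,  0],
               [ 1,  2,  1]], dtype=float)

def conv2_valid(a, k):
    """'Valid' 2-D correlation via explicit loops (avoids a SciPy dependency)."""
    H, W = a.shape
    h, w = k.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(a[i:i + h, j:j + w] * k)
    return out

gy = conv2_valid(img, ky)                         # gradient response
binary = np.abs(gy) > 0.5 * np.abs(gy).max()      # binary edge image
row = int(np.argmax(np.abs(gy).sum(axis=1))) + 1  # strongest edge, image coords
```

The detected row would then be mapped to a physical water level through the fitted cup geometry.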

  2. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    PubMed Central

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The comparison of the estimated results to ground truth produced a variation of 3% from the mean. PMID:26393600

  3. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of the Moon's surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere according to the panorama orientation coordinates and camera parameters to build the virtual scene. Because stars are visible from the Moon at any time, we render the Sun, planets and stars according to the time and the rover's location, based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will make more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.

  4. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    PubMed Central

    Parodi, K; Ferrari, A; Sommerer, F; Paganetti, H

    2008-01-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography / computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modeling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield Unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  5. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  6. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284
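The Mahalanobis distance error minimized by the proposed algorithm weights each pixel residual by the feature's covariance from the probabilistic map. A minimal sketch, with all numeric values invented for illustration:

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Distance between an observed image feature x and a map feature with
    predicted position mu and covariance cov from the probabilistic map."""
    d = x - mu
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

mu = np.array([320.0, 240.0])              # predicted pixel position
cov = np.array([[4.0, 0.0],                # anisotropic pixel uncertainty:
                [0.0, 1.0]])               # sigma = 2 px in u, 1 px in v
obs = np.array([324.0, 240.0])             # observed feature
dist = mahalanobis(obs, mu, cov)           # -> 2.0 (4 px offset / sigma of 2)
```

Summing such squared distances over all 3D-to-2D correspondences gives the cost that the pose refinement minimizes after the initial PnP estimate.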

  7. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284

  8. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models, which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of the pilots and planes. Although there are instruments available on the market to measure those parameters, their relatively high cost makes them unavailable in many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after the measurement of the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
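Both quantities measured by the prototype follow from textbook relations: cloud base height from the parallax of a feature seen by two cameras, and visibility from the Koschmieder form of the Lambert-Beer contrast decay. A hedged sketch, where the focal length, baseline and disparity values are invented for illustration and the geometry ignores the prototype's tilted-camera refinement:

```python
import math

def cloud_base_height_m(focal_mm, baseline_m, disparity_mm):
    # Pinhole parallax: two upward-pointing cameras a baseline B apart see
    # the same cloud feature shifted by disparity d on the sensor; height
    # follows from similar triangles. (Illustrative geometry only - the
    # prototype's tilted-camera arrangement is more involved.)
    return focal_mm * baseline_m / disparity_mm

def visibility_km(contrast_ratio, distance_km):
    # A dark object's apparent contrast decays as C = C0*exp(-sigma*d)
    # (Lambert-Beer); with the conventional 2% contrast threshold, the
    # meteorological visibility is V = -ln(0.02)/sigma (Koschmieder).
    sigma = -math.log(contrast_ratio) / distance_km   # extinction [1/km]
    return -math.log(0.02) / sigma

# Invented example values: 8 mm lens, 2 km baseline, 10 mm disparity.
h = cloud_base_height_m(8.0, 2000.0, 10.0)            # -> 1600 m
v = visibility_km(contrast_ratio=0.5, distance_km=5.0)
```

Wind speed at cloud base would follow from tracking the same feature's displacement between successive frames at the recovered height.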

  9. Camera Based Closed Loop Control for Partial Penetration Welding of Overlap Joints

    NASA Astrophysics Data System (ADS)

    Abt, F.; Heider, A.; Weber, R.; Graf, T.; Blug, A.; Carl, D.; Höfler, H.; Nicolosi, L.; Tetzlaff, R.

    Welding of overlap joints with partial penetration in automotive applications is a challenging process, since the laser power must be set very precisely to achieve a proper connection between the two joining partners without damaging the backside of the sheet stack. Even minor changes in welding conditions can lead to bad results. To overcome this problem, a camera-based closed-loop control for partial penetration welding of overlap joints was developed. With this closed-loop control it is possible to weld such configurations with a stable process result even under changing welding conditions.

  10. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebrovasculature and the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is more than necessary. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). We demonstrate volumetric tomographic reconstruction results from tissue phantoms; the approach has great potential to determine and monitor the effect of anti-stroke therapies.

  11. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that makes use of the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence as early as the 1900s, little has changed in the design of a fundus camera, and there is minimal information about the design principles utilized. Parameters and specifications involved in the design of a fundus camera are determined, and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities such as optical coherence tomography, imaging spectroscopy and imaging polarimetry to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. Design, implementation and testing of a snapshot polarimeter fundus camera are demonstrated.

  12. Positron and Ion Migrations and the Attractive Interactions between like Ion Pairs in the Liquids: Based on Studies with Slow Positron Beam

    NASA Astrophysics Data System (ADS)

    Kanazawa, I.; Sasaki, T.; Yamada, K.; Imai, E.

    2014-04-01

    We have discussed positron and ion diffusion in liquids using the gauge-invariant effective Lagrangian density with a spontaneously broken density (the hedgehog-like density) with internal non-linear gauge fields (Yang-Mills gauge fields), and have presented the relation to the Hubbard-Onsager theory.

  13. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, rapid and accurate camera positioning implemented in hardware has become feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor is used to extract the pixels of three target points, implemented through an FPGA hardware driver; visible-light LEDs serve as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered (median filtering is used here) to suppress noise introduced by the physical properties of the platform. (3) Marker point coordinates are extracted by the FPGA hardware circuit: a new iterative threshold selection method is applied to segment the image, the resulting binary image is labelled, and the coordinates of the feature points are computed by the center-of-gravity method. (4) Direct linear transformation (DLT) and epipolar constraints are applied to reconstruct the three-dimensional space coordinates from the planar-array CMOS system. The SOPC system-on-chip exploits its dual-core architecture to run matching and coordinate operations separately, thus increasing processing speed.
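Steps (2)-(3) above, iterative threshold selection followed by center-of-gravity extraction of the marker coordinates, can be sketched in software (Python here for readability; the thesis implements them in FPGA hardware, and the test frame below is invented):

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """Iterative (ISODATA-style) threshold selection: start at the global
    mean, split pixels into two classes, move the threshold to the midpoint
    of the two class means, and repeat until it stabilizes."""
    t = img.mean()
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

def centroid(binary):
    """Center-of-gravity of the foreground pixels, returned as (row, col)."""
    ys, xs = np.nonzero(binary)
    return ys.mean(), xs.mean()

# Synthetic frame: dark background, one bright LED blob centred at (20, 30).
img = np.full((64, 64), 10.0)
img[18:23, 28:33] = 200.0

t = iterative_threshold(img)
cy, cx = centroid(img > t)        # -> (20.0, 30.0)
```

With several LED markers, connected-component labelling would separate the blobs before the per-blob centroid step.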

  14. Positron Physics

    NASA Technical Reports Server (NTRS)

    Drachman, Richard J.

    2003-01-01

    I will give a review of the history of low-energy positron physics, experimental and theoretical, concentrating on the type of work pioneered by John Humberston and the positronics group at University College. This subject became a legitimate subfield of atomic physics under the enthusiastic direction of the late Sir Harrie Massey, and it attracted a diverse following throughout the world. At first purely theoretical, the subject has now expanded to include high brightness beams of low-energy positrons, positronium beams, and, lately, experiments involving anti-hydrogen atoms. The theory requires a certain type of persistence in its practitioners, as well as an eagerness to try new mathematical and numerical techniques. I will conclude with a short summary of some of the most interesting recent advances.

  15. Replacing 16 mm film cameras with high definition digital cameras

    SciTech Connect

    Balch, K.S.

    1995-12-31

For many years 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne gun cameras, range tracking and other hazardous environments. The companies and government agencies using these cameras need to replace them with a more cost-effective solution. Film-based cameras still offer the best resolving capability; however, film development time, chemical disposal, recurring media cost, and faster digital analysis are factors driving the desire for a 16 mm film camera replacement. This paper describes a new camera from Kodak that has been designed to replace 16 mm high-speed film cameras.

  16. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    SciTech Connect

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-15

Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation- and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature-based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature-based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface registration systems.
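
The fine-tuning stage relies on the iterative closest point algorithm. A minimal sketch of one point-to-point ICP iteration (nearest-neighbour matching followed by the Kabsch rigid-transform solution) is shown below; this is a generic textbook version, not the paper's accelerated implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point ICP iteration: match each source point to its
    nearest destination point, then solve the best rigid transform (Kabsch).
    Returns (R, t) such that R @ p + t maps src points onto dst."""
    # nearest-neighbour correspondences (brute force)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # centroids and cross-covariance
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Plain ICP only converges to the correct transform when the initial displacement is small, which is exactly why the paper's feature-based preregistration matters: it supplies a starting point from which iterations like this one can refine the result.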

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) a cooperative long-range optical imaging module based on an infrared camera array and a near-infrared laser lamp; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with a fixed-wing aircraft demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  18. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  19. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in a 360-degree viewing zone. In order to do so, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed from the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized model at the anamorphic optic system's angular step. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display is an excellent way to display a real object in a 360-degree viewing zone.

  20. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
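
The core of such a cost function, weighted point-to-line distances with a robust penalty to suppress outliers, can be sketched as follows. This is an illustrative reconstruction under stated assumptions: a Huber penalty stands in for the paper's unspecified penalizing function, and the names are hypothetical:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from 2-D point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def calibration_cost(points, lines, weights, delta=1.0):
    """Weighted sum of point-to-line distances between projected LiDAR points
    and image line features, with a Huber penalty so that outliers contribute
    only linearly instead of quadratically."""
    cost = 0.0
    for p, (a, b), w in zip(points, lines, weights):
        d = point_line_distance(p, a, b)
        # Huber: quadratic near zero, linear for large residuals
        cost += w * (0.5 * d * d if d <= delta else delta * (d - 0.5 * delta))
    return cost
```

A nonlinear least-squares solver would then minimize `calibration_cost` over the six extrinsic parameters that determine where each LiDAR point projects on the image plane.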

  1. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high-performance thermal imager with integrated geolocation functions is a powerful long-range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built-in Global Positioning System (GPS) support, which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process, which incorporated user feedback at various stages.
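
The geolocation computation itself combines the observer's GPS position with the compass bearing and rangefinder distance. A minimal sketch using the standard great-circle forward solution (not Firefly's actual code) looks like:

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, metres

def geolocate(lat, lon, bearing_deg, range_m):
    """Target position from observer latitude/longitude (degrees), compass
    bearing (degrees) and laser-rangefinder range (metres), via the
    great-circle 'destination point' formula."""
    phi1 = math.radians(lat)
    lam1 = math.radians(lon)
    theta = math.radians(bearing_deg)
    delta = range_m / EARTH_R          # angular distance
    phi2 = math.asin(math.sin(phi1) * math.cos(delta) +
                     math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)
```

Over typical targeting ranges of a few kilometres, the dominant error sources are the compass bearing and the GPS fix rather than this spherical-Earth approximation.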

  2. Calibration and disparity maps for a depth camera based on a four-lens device

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Colicchio, Bruno; Lauffenburger, Jean Philippe; Haeberlé, Olivier; Cudel, Christophe

    2015-11-01

We propose a model of depth camera based on a four-lens device. This device is used to validate alternative approaches for calibrating multiview cameras and for computing disparity or depth images. The calibration method builds on previous works, in which the principles of variable homography were extended for three-dimensional (3-D) measurement. Here, calibration is performed between two contiguous views obtained on the same image sensor, which leads us to a new approach that simplifies calibration by using the properties of the variable homography. The second part addresses new principles for obtaining disparity images without any matching: a fast contour propagation algorithm is proposed that requires no structured or random pattern projection. These principles are proposed in the framework of quality control by vision, for inspection under natural illumination. By preserving scene photometry, other standard controls, such as calipers, shape recognition, or barcode reading, can be performed alongside the 3-D measurements. The approaches presented here are evaluated: first, we show that rapid calibration is relevant for devices mounted with multiple lenses; second, synthetic and real experiments validate our method for computing depth images.

  3. Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera

    NASA Astrophysics Data System (ADS)

    Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.

    2007-02-01

Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster-based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package determines the 3-D radioactivity distribution from list-mode data sets and can also simulate noise-free projections of digital phantoms. We separated the system matrix generation from the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and reused for every image reconstruction, making the process much faster. The Poisson and random noise sensitivity of the ML-EM iterative algorithm were studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from line- and cylinder-shaped phantoms as well as a rat.
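
The ML-EM algorithm studied here uses the classic multiplicative update. A minimal dense-matrix sketch is given below; real list-mode PET packages like the one described use sparse or on-the-fly system matrices, so this is only an illustration of the update rule:

```python
import numpy as np

def mlem(A, counts, n_iter=50):
    """ML-EM image reconstruction. A is the system matrix (detector bins x
    voxels), counts the measured projections. Classic multiplicative update:
        x <- x / (A^T 1) * A^T (counts / (A x))
    """
    x = np.ones(A.shape[1])            # flat initial estimate
    sens = A.sum(axis=0)               # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                   # forward projection
        proj[proj == 0] = 1e-12        # guard against division by zero
        x = x / sens * (A.T @ (counts / proj))
    return x
```

Because every iteration needs a full forward and back projection, precomputing the system matrix once per fixed detector geometry, as the paper does, removes the dominant per-reconstruction cost.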

  4. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
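
For context, position errors that grow toward the edge of a wide-angle lens are consistent with a simple first-order radial distortion model, sketched below. The coefficient value used here is purely illustrative, not measured from the WETF cameras:

```python
import math

def radial_distort(x, y, k1):
    """First-order radial distortion: the point (x, y), in normalised image
    coordinates about the principal point, moves along the radius by a
    factor (1 + k1 * r^2). Negative k1 gives barrel distortion."""
    s = 1.0 + k1 * (x * x + y * y)
    return x * s, y * s

def percent_error(x, y, k1):
    """Apparent position error of a grid point, as a percentage of its
    distance from the image centre."""
    xd, yd = radial_distort(x, y, k1)
    return 100.0 * math.hypot(xd - x, yd - y) / math.hypot(x, y)
```

With an illustrative k1 = -0.08, a point at the edge of the normalised field (r = 1) shows an 8% error, while a point at r = 0.5 shows only 2%, which mirrors the observation that avoiding the outermost lens regions keeps the error small.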

  5. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera.

    PubMed

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish the corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved by using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape by adding a single camera. PMID:27376292

  6. A positioning system for forest diseases and pests based on GIS and PTZ camera

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Wang, L. L.; Zhao, F. F.; Wang, C. B.

    2014-03-01

Forest diseases and pests cause enormous economic losses and ecological damage every year in China. The key to preventing and controlling them is obtaining accurate information in a timely manner. In order to improve monitoring coverage and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area, and is particularly effective where trees have withered. Forest disease prevention and control workers can then inspect the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and its accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate in practice, and the 2D GIS subsystem can output disaster thematic maps. The video control subsystem can change the pan/tilt/zoom of a digital camera remotely to focus on a suspected area, while the disaster positioning subsystem implements the positioning algorithm. It is proved that the positioning system can observe forest diseases and pests in practical applications for forest departments.

  7. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether the change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, respiratory rate measurements from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test compared the performance of the proposed system with the gold standard under ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and that it is a viable alternative for monitoring the respiratory activity of a person without using invasive sensors. PMID:26609383
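
Extracting a breathing rate from a chest-to-camera distance signal can be reduced to finding the dominant spectral peak in a plausible breathing band. The following is a minimal sketch; the band limits and the FFT-based method are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

def respiratory_rate(distance, fs):
    """Estimate breaths per minute from a chest-to-camera distance signal
    sampled at fs Hz, via the dominant spectral peak in the 0.1-1 Hz band
    (6-60 breaths/min)."""
    x = distance - np.mean(distance)           # remove the static offset
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)     # plausible breathing band
    f_peak = freqs[band][np.argmax(spec[band])]
    return 60.0 * f_peak
```

For example, a 60 s recording at 10 Hz of a chest oscillating at 0.25 Hz yields 15 breaths/min; restricting the search band is what rejects slow posture drift and high-frequency sensor noise.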

  8. Parkinson's disease assessment based on gait analysis using an innovative RGB-D camera system.

    PubMed

    Rocha, Ana Patrícia; Choupina, Hugo; Fernandes, José Maria; Rosas, Maria José; Vaz, Rui; Silva Cunha, João Paulo

    2014-01-01

Movement-related diseases, such as Parkinson's disease (PD), progressively affect motor function, often leading to severe motor impairment and a dramatic loss of the patients' quality of life. Human motion analysis techniques can be very useful in supporting the clinical assessment of this type of disease. In this contribution, we present an RGB-D camera (Microsoft Kinect) system and its evaluation for PD assessment. Based on skeleton data extracted from the gait of three PD patients treated with deep brain stimulation and three control subjects, several gait parameters were computed and analyzed, with the aim of discriminating between non-PD and PD subjects, as well as between two PD states (stimulator ON and OFF). We verified that, among the several quantitative gait parameters, the variance of the center shoulder velocity presented the highest discriminative power to distinguish between non-PD, PD ON and PD OFF states (p = 0.004). Furthermore, we have shown that our low-cost portable system can be easily mounted in any hospital environment for evaluating patients' gait. These results demonstrate the potential of using an RGB-D camera as a PD assessment tool. PMID:25570653

  9. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we weight the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416

  10. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera

    PubMed Central

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting the minimum diameter of the equatorial section and the volume, and to establish the corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one, with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved by using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape by adding a single camera. PMID:27376292

  11. Portable Positron Measurement System (PPMS)

    SciTech Connect

    2011-01-01

The Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our Facebook site http://www.facebook.com/idahonationallaboratory.

  12. Portable Positron Measurement System (PPMS)

    ScienceCinema

    None

    2013-05-28

The Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our Facebook site http://www.facebook.com/idahonationallaboratory.

  13. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
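
Full-search block matching, the building block of the proposed speed-up, can be sketched as follows. This is a generic sum-of-absolute-differences version with illustrative block and search sizes, not the paper's implementation:

```python
import numpy as np

def block_match(prev, curr, top, left, size=8, search=4):
    """Full-search block matching: find the displacement (dy, dx) of the
    block at (top, left) in `prev` that best matches `curr`, by minimum
    sum of absolute differences (SAD)."""
    block = prev[top:top + size, left:left + size]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > curr.shape[0] or x + size > curr.shape[1]:
                continue                      # candidate falls off the frame
            sad = np.abs(curr[y:y + size, x:x + size] - block).sum()
            if sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```

Restricting the search to a small window around each block is what keeps motion estimation cheap enough to cut the reported reconstruction time, at the cost of missing very large motions.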

  14. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. PMID:23312587
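
The spatial-domain idea, estimating the PRNU fingerprint from noise residuals and matching by normalized correlation, can be sketched as below. This is a toy reconstruction: a plain 3x3 median filter stands in for the paper's combined adaptive and median filtering, and the enhancement stages are omitted:

```python
import numpy as np

def median3(img):
    """3x3 median filter (edges handled by clamping), pure NumPy."""
    p = np.pad(img, 1, mode="edge")
    stack = [p[y:y + img.shape[0], x:x + img.shape[1]]
             for y in range(3) for x in range(3)]
    return np.median(stack, axis=0)

def prnu_fingerprint(images):
    """Camera fingerprint estimate: average noise residual (image minus its
    median-filtered version) over several images from the same camera."""
    return np.mean([im - median3(im) for im in images], axis=0)

def correlation(a, b):
    """Normalised cross-correlation used to match a residual to a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))
```

In a source-identification test, the residual of a probe image is correlated against each candidate camera's fingerprint, and a match is declared when the correlation clearly exceeds that of non-matching cameras.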

  15. Real-time neural network based camera localization and its extension to mobile robot control.

    PubMed

    Choi, D H; Oh, S Y

    1997-06-01

The feasibility of using neural networks for camera localization and mobile robot control is investigated here. This approach has the advantage of eliminating the laborious and error-prone processes of imaging system modeling and calibration. Basically, two different approaches to using neural networks are introduced: one is a hybrid approach combining neural networks with the pinhole-based analytic solution, while the other is purely neural network based. These techniques have been tested and compared through both simulation and real-time experiments and are shown to yield more precise localization than analytic approaches. Furthermore, this neural localization method is also shown to be directly applicable to the navigation control of an experimental mobile robot along a hallway, guided purely by a dark wall strip. It also facilitates multi-sensor fusion through the use of multiple sensors of different types for control, due to the network's capability of learning without models. PMID:9427102

  16. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras.

    PubMed

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  17. Scintillators for positron emission tomography

    SciTech Connect

    Moses, W.W.; Derenzo, S.E.

    1995-09-01

Like most applications that utilize scintillators for gamma detection, Positron Emission Tomography (PET) desires materials with high light output, short decay time, and excellent stopping power that are also inexpensive, mechanically rugged, and chemically inert. Realizing that this "ultimate" scintillator may not exist, this paper evaluates the relative importance of these qualities and describes their impact on the imaging performance of PET. The most important PET scintillator quality is the ability to absorb 511 keV photons in a small volume, which affects the spatial resolution of the camera. The dominant factor is a short attenuation length (≤1.5 cm is required), although a high photoelectric fraction is also important (>30% is desired). The next most important quality is a short decay time, which affects both the dead time and the coincidence timing resolution. Detection rates for single 511 keV photons can be extremely high, so decay times ≤500 ns are essential to avoid dead time losses. In addition, positron annihilations are identified by time coincidence, so ≤5 ns fwhm coincidence pair timing resolution is required to identify events with narrow coincidence windows, reducing contamination due to accidental coincidences. Current trends in PET cameras are toward septaless, "fully-3D" cameras, which have significantly higher count rates than conventional 2-D cameras and so place higher demands on scintillator decay time. Light output affects energy resolution, and thus the ability of the camera to identify and reject events where the initial 511 keV photon has undergone Compton scatter in the patient. The scatter-to-true event fraction is much higher in fully-3D cameras than in 2-D cameras, so future PET cameras would benefit from scintillators with a 511 keV energy resolution <10-12% fwhm.
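
The accidental-coincidence argument can be made concrete: for two detectors with singles rates R1 and R2 and a coincidence window of half-width τ, the standard estimate of the random rate is R_acc = 2τ·R1·R2, which is why narrow windows (and hence fast scintillators) matter. A one-line sketch:

```python
def accidental_rate(r1, r2, tau):
    """Accidental (random) coincidence rate between two detectors with
    singles rates r1, r2 (counts/s) and coincidence window half-width tau
    (s): R_acc = 2 * tau * r1 * r2."""
    return 2.0 * tau * r1 * r2
```

With singles rates of 1e5 counts/s per detector and tau = 5 ns, the accidental rate is 100 counts/s; halving the window halves the contamination, directly rewarding scintillators with better timing resolution.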

  18. A rut measuring method based on laser triangulation with single camera

    NASA Astrophysics Data System (ADS)

    Ma, Yue; Zhang, Wen-hao; Li, Song; Wang, Hong

    2013-12-01

    Pavement rutting is one of the major forms of highway distress. In this article, a rut measuring method based on laser triangulation is developed. A rut-resolution model is established to design the parameters of the optical system, and the laser profile in the road image is extracted by median filtering and wavelet transform. An accurate calibration method for a large field of view, consisting of 28 sub-FOV calibrations and road profile reconstruction, is also presented. The calibration experiment and a new rut calculation method are described. Measurement results are reported for both a static gauge-block test and dynamic measurement on a highway. The conclusion is that this method, using a single CCD camera and two 808 nm semiconductor lasers, can reach an accuracy of 1 mm in rut measurement under harsh conditions.
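    The abstract does not spell out its rut calculation; as a simplified illustration (not the paper's exact method), the classical "straightedge" definition applied to a reconstructed transverse profile looks like this:

```python
def rut_depth(xs, zs):
    """Rut depth via a simulated straightedge: lay a line across the two
    profile endpoints and return the largest vertical gap between the line
    and the measured surface. xs are transverse positions, zs are heights."""
    x0, xn = xs[0], xs[-1]
    z0, zn = zs[0], zs[-1]
    depth = 0.0
    for x, z in zip(xs, zs):
        line_z = z0 + (zn - z0) * (x - x0) / (xn - x0)  # straightedge height
        depth = max(depth, line_z - z)                   # gap below the edge
    return depth
```

    Real implementations lay the straightedge per wheel path rather than across the full lane, but the depth definition is the same.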

  19. Portable profilometer based on low-coherence interferometry and smart pixel camera

    NASA Astrophysics Data System (ADS)

    Salbut, Leszek; Pakuła, Anna; Tomczewski, Sławomir; Styk, Adam

    2010-09-01

    Although low-coherence interferometers are commercially available (e.g., white light interferometers), they are generally bulky, expensive, and offer limited flexibility. In this paper a new portable profilometer based on low-coherence interferometry is presented. In the device a white light diode with a controlled spectrum shape is used to increase the zero-order fringe contrast, which allows for its faster and more reliable localization. For image analysis a special type of CMOS matrix (called a smart pixel camera), synchronized with the reference mirror transducer, is applied. Because the fringe contrast analysis is realized in hardware, independently in each pixel, the measurement time decreases significantly. High-speed processing together with the compact design allows the profilometer to be used as a portable device for both indoor and outdoor measurements. The capabilities of the designed profilometer are illustrated by a few application examples.

  20. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions, 8.8 × 8.8 × 0.2 mm3) includes a two-axis decoupling XY stage and has sufficient strength to suspend the image sensor (IS) used for the anti-shaking function. The XY stage is designed to route electrical signals from the suspended IS through eight signal springs to 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to compensate for hand shake. The applied voltage for a 25 μm displacement is lower than 20 V; the resonant frequency of the actuating device is 4485 Hz, and the rise time is 21 ms.

  1. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance, counting the number of males and females in a shopping mall, access control in restricted areas, and human-computer interaction. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome these problems, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, obtained with various feature extraction and fusion methods, show that our approach is efficient for gender recognition, as confirmed by comparing recognition rates with those of conventional systems. PMID:26828487
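    The abstract compares several fusion methods without naming them; one common choice, score-level weighted-sum fusion of the two cameras' classifier outputs, can be sketched as follows (names, weights, and thresholds are illustrative, not the paper's):

```python
def fuse_scores(visible_score, thermal_score, w_visible=0.5):
    """Score-level fusion: weighted sum of the two classifiers' probabilities
    (here, each score is the estimated probability the subject is male)."""
    return w_visible * visible_score + (1.0 - w_visible) * thermal_score

def predict_gender(visible_score, thermal_score, w_visible=0.5, threshold=0.5):
    """Fuse the per-camera scores, then threshold the combined score."""
    fused = fuse_scores(visible_score, thermal_score, w_visible)
    return "male" if fused >= threshold else "female"
```

    A practical refinement is to shift weight toward the thermal score at night, when visible-light scores degrade.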

  2. A novel method to measure the ambient aerosol phase function based on dual ccd-camera

    NASA Astrophysics Data System (ADS)

    Bian, Yuxuan; Zhao, Chunsheng; Tao, Jiangchuan; Kuang, Ye; Zhao, Gang

    2016-04-01

    The aerosol scattering phase function describes the intensity of light scattered by particles as a function of scattering angle. It is important for understanding aerosol climate effects and for remote sensing inversion analysis. In this study, a novel method to measure the ambient aerosol phase function is developed based on a laser detection system with dual charge-coupled device (CCD) cameras. An integrating nephelometer is used to correct the inversion result. The instrument was validated by both field and laboratory measurements of atmospheric aerosols. A Mie theory model, driven by measurements of the particle number size distribution and the black carbon mass concentration, was used to simulate the aerosol phase function for comparison with the values from the instrument. The comparison shows good consistency.

  3. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves. PMID:22016249

  4. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, slow, and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078
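    The abstract names the Neighborhood Average Difference (NAD) algorithm without detailing it; a plausible minimal reading, flagging pixels that deviate strongly from their local neighborhood mean, is sketched below. This is our guess at the idea, not the authors' implementation:

```python
def neighborhood_average_difference(image, threshold):
    """Flag pixels whose gray level deviates from the mean of their eight
    3x3 neighbors by more than `threshold` (candidate surface defects).
    `image` is a list of equal-length rows of gray levels."""
    h, w = len(image), len(image[0])
    defects = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neighbors = [image[y + dy][x + dx]
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                         if (dy, dx) != (0, 0)]
            if abs(image[y][x] - sum(neighbors) / 8.0) > threshold:
                defects.append((y, x))
    return defects
```

    A production system would add border handling and group flagged pixels into connected regions before sizing them against the 0.4 mm × 0.4 mm limit.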

  5. Cross-ratio-based line scan camera calibration using a planar pattern

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Qiu, Shaohua

    2016-01-01

    A flexible new technique is proposed to calibrate the geometric model of line scan cameras. In this technique, the line scan camera is rigidly coupled to a calibrated frame camera to establish a pair of stereo cameras. The linear displacements and rotation angles between the two cameras are fixed but unknown. This technique only requires the pair of stereo cameras to observe a specially designed planar pattern shown at a few (at least two) different orientations. At each orientation, a stereo pair is obtained including a linear array image and a frame image. Radial distortion of the line scan camera is modeled. The calibration scheme includes two stages. First, point correspondences are established from the pattern geometry and the projective invariance of cross-ratio. Second, with a two-step calibration procedure, the intrinsic parameters of the line scan camera are recovered from several stereo pairs together with the rigid transform parameters between the pair of stereo cameras. Both computer simulation and real data experiments are conducted to test the precision and robustness of the calibration algorithm, and very good calibration results have been obtained. Compared with classical techniques which use three-dimensional calibration objects or controllable moving platforms, our technique is affordable and flexible in close-range photogrammetric applications.
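    The projective invariance of the cross-ratio that underpins the correspondence step can be verified in a few lines (a generic illustration using scalar coordinates along a line, not code from the paper):

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (A,B;C,D) of four collinear points given by scalar
    coordinates along the line; invariant under projective transformations."""
    return ((c - a) / (c - b)) / ((d - a) / (d - b))
```

    Because a 1-D projective map x -> (p*x + q) / (r*x + s) preserves this value, four known collinear pattern points can be matched to their images even under perspective distortion, which is what makes the pattern-to-image correspondences recoverable.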

  6. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
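    The core geometry is simple when the horizon is visible: the sighting's depression angle below the horizon fixes its range. A flat-sea sketch of that relation (our simplification; the published method also corrects for frame rotation and uses reference landmarks, and Earth curvature and refraction matter at the kilometre ranges reported):

```python
import math

def distance_to_sighting_m(camera_height_m, pixels_below_horizon, rad_per_pixel):
    """Flat-sea range estimate: a sighting appearing `pixels_below_horizon`
    below the horizon line is viewed at depression angle
    pixels * rad_per_pixel, giving ground distance d = h / tan(angle)."""
    angle_rad = pixels_below_horizon * rad_per_pixel
    return camera_height_m / math.tan(angle_rad)
```

    The strong sensitivity of d to small angles at long range is why a 9.59 m camera height still yields a mean location error of only ∼12 m at the mean sighting distance but degrades toward the ∼3 km limit.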

  8. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V.

    2016-06-01

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.
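    For reference, the conventional four-frame amplitude estimate mentioned above combines four intensity samples taken at 90° phase steps:

```python
import math

def four_frame_amplitude(i1, i2, i3, i4):
    """Amplitude of a signal I_k = B + A*cos(phi + k*pi/2), k = 0..3,
    recovered as A = 0.5 * sqrt((I1 - I3)**2 + (I2 - I4)**2);
    the offset B and phase phi drop out of the differences."""
    return 0.5 * math.sqrt((i1 - i3) ** 2 + (i2 - i4) ** 2)
```

    Because the four samples are acquired sequentially, the speckle field must stay correlated across all four frames; a single-frame lock-in measurement relaxes this requirement, which is the decorrelation advantage the abstract claims.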

  9. Investigation of ionic conductivity of polymeric electrolytes based on poly (ether urethane) networks using positron probe

    NASA Astrophysics Data System (ADS)

    Peng, Z. L.; Wang, B.; Li, S. Q.; Wang, S. J.; Liu, H.; Xie, H. Q.

    1994-10-01

    Positron-lifetime measurements have been made for poly (ether urethane) undoped and doped with [LiClO4]/[Unit] = 0.05 in the temperature range of 120-340 K. The measured lifetime spectra were resolved into three components. The lifetime and the intensity of orthopositronium were used to evaluate the amount of the free volume in poly (ether urethane). It was found that the variation of ionic conductivity with temperature and salt concentration can be rationalised in terms of free volume consideration.
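    Free-volume hole sizes are conventionally derived from the ortho-positronium lifetime via the Tao-Eldrup relation; a sketch of that standard conversion (a general model, not code from this paper):

```python
import math

def tao_eldrup_lifetime_ns(radius_angstrom, delta_r=1.656):
    """Tao-Eldrup model: o-Ps pickoff lifetime (ns) in a spherical
    free-volume hole of radius R (angstrom), with empirical electron-layer
    thickness DeltaR ~ 1.656 angstrom."""
    r0 = radius_angstrom + delta_r
    x = radius_angstrom / r0
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

def hole_radius_from_lifetime(tau_ns, lo=0.1, hi=10.0, tol=1e-9):
    """Invert the model by bisection (lifetime increases monotonically
    with hole radius on this interval)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tao_eldrup_lifetime_ns(mid) < tau_ns:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    Typical polymer o-Ps lifetimes of 1.5-3 ns correspond to hole radii of roughly 2.3-3.6 angstrom under this model, which is the free-volume quantity the conductivity trends are rationalised against.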

  10. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models. This implies that improvements at the sensor data level can still contribute significantly to approaching the GRACE baseline accuracy. A recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for the processing of the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B, with focus on the method for combining the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solutions by applying two different combination methods to the SCA Level-1A data. The first method introduces information about the anisotropic accuracy of the star camera measurement in terms of a weighting matrix; this method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by about a factor of 3-4. The data analysis revealed that the reason is an incorrect implementation of the algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement over the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data