Science.gov

Sample records for camera based positron

  1. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A positron camera is a device intended to image...

  2. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  3. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  4. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  5. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  6. Clinical applications with the HIDAC positron camera

    NASA Astrophysics Data System (ADS)

    Frey, P.; Schaller, G.; Christin, A.; Townsend, D.; Tochon-Danguy, H.; Wensveen, M.; Donath, A.

    1988-06-01

    A high density avalanche chamber (HIDAC) positron camera has been used for positron emission tomographic (PET) imaging in three different human studies, including patients presenting with: (I) thyroid diseases (124 cases); (II) clinically suspected malignant tumours of the pharynx or larynx (ENT) region (23 cases); and (III) clinically suspected primary malignant and metastatic tumours of the liver (9 cases, 19 PET scans). The positron emitting radiopharmaceuticals used for the three studies were Na 124I (4.2 d half-life) for the thyroid, 55Co-bleomycin (17.5 h half-life) for the ENT region and 68Ga-colloid (68 min half-life) for the liver. Tomographic imaging was performed: (I) 24 h after oral Na 124I administration to the thyroid patients, (II) 18 h after intravenous administration of 55Co-bleomycin to the ENT patients and (III) 20 min following the intravenous injection of 68Ga-colloid to the liver tumour patients. Three different imaging protocols were used with the HIDAC positron camera to perform appropriate tomographic imaging in each patient study. Promising results were obtained in all three studies, particularly in tomographic thyroid imaging, where the PET technique makes a significant clinical contribution to diagnosis and therapy planning. In the other two PET studies encouraging results were obtained for the detection and precise localisation of malignant tumour disease, including an in vivo estimate of the functional liver volume based on the reticulo-endothelial system (RES) of the liver, and the three-dimensional display of liver PET data using shaded graphics techniques. The clinical significance of the overall results obtained in both the ENT and the liver PET study, however, is still uncertain, and the role of PET as a new imaging modality in these applications is not yet clearly established; its clinical impact in liver and ENT malignant tumour staging needs further investigation.

  7. A simple data loss model for positron camera systems

    SciTech Connect

    Eriksson, L. (Dept. of Clinical Neurophysiology); Wienhard, K.; Dahlbom, M. (School of Medicine)

    1994-08-01

    A simple model to describe data losses in PET cameras is presented. The model is not intended to be used primarily for dead time corrections in existing scanners, although this is possible. Instead, the model is intended to be used for data simulations in order to determine the figures of merit of future camera systems based on state-of-the-art data-handling solutions. The model assumes the data loss to be factorized into two components, one describing the detector or block-detector performance and the other the remaining data handling, such as coincidence determination, data transfer and data storage. Two modern positron camera systems have been investigated in terms of this model: the Siemens-CTI ECAT EXACT and ECAT EXACT HR systems, which both have an axial field-of-view (FOV) of about 15 cm. Both have retractable septa, can acquire data from the whole volume within the FOV, and can reconstruct volume image data. An example is given of how to use the model for live time calculation in a futuristic large axial FOV cylindrical system.
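    The factorized loss model can be sketched in a few lines. The split into a detector (block) live-time factor and a data-handling live-time factor follows the abstract; the specific paralyzable and non-paralyzable dead-time forms and the time constants below are illustrative assumptions, not values from the paper.

```python
import math

def detector_live_fraction(singles_rate, tau_block):
    """Paralyzable dead-time model for the block detector
    (an illustrative choice; the paper only states that losses factorize)."""
    return math.exp(-singles_rate * tau_block)

def datapath_live_fraction(coincidence_rate, tau_dp):
    """Non-paralyzable model for coincidence processing, transfer and storage."""
    return 1.0 / (1.0 + coincidence_rate * tau_dp)

def observed_rate(true_rate, singles_rate, tau_block=2e-6, tau_dp=0.5e-6):
    # Total live time is the product of the two independent factors.
    live = detector_live_fraction(singles_rate, tau_block) * \
           datapath_live_fraction(true_rate, tau_dp)
    return true_rate * live
```

    With such a factorization, each component can be characterized separately and recombined when simulating a hypothetical future system.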

  8. A preliminary evaluation of a positron camera system using weighted decoding of individual crystals

    SciTech Connect

    Holte, S.; Eriksson, L.; Larsson, J.E.; Ericson, T.; Stjernberg, H.; Hansen, P.; Bohm, C.; Kesselberg, M.; Rota, E.; Herzog, H.

    1988-02-01

    The whole-body positron camera PC4096-15WB is now in operation. It is based on a detection unit with sixteen scintillating crystals mounted on two dual photomultiplier tubes. The design ideas are specified and a system description is given. Preliminary test results including spatial resolution, sensitivity to true and random coincidences, scatter correction, and count rate linearity are presented.

  9. Positron camera using position-sensitive detectors: PENN-PET

    SciTech Connect

    Muehllehner, G.; Karp, J.S.

    1986-01-01

    A single-slice positron camera has been developed with good spatial resolution and high count rate capability. The camera uses a hexagonal arrangement of six position-sensitive NaI(Tl) detectors. The count rate capability of NaI(Tl) was extended to 800k cps through the use of pulse shortening. In order to keep the detectors stationary, an iterative reconstruction algorithm was modified which ignores the missing data in the gaps between the six detectors and gives artifact-free images. The spatial resolution, as determined from the image of point sources in air, is 6.5 mm full width at half maximum. We have also imaged a brain phantom and dog hearts.

  10. Development of the LBNL positron emission mammography camera

    SciTech Connect

    Huber, Jennifer S.; Choong, Woon-Seng; Wang, Jimmy; Maltz, Jonathon S.; Qi, Jinyi; Mandelli, Emanuele; Moses, William W.

    2002-12-19

    We present the construction status of the LBNL Positron Emission Mammography (PEM) camera, which utilizes a PET detector module with depth of interaction measurement consisting of 64 LSO crystals (3x3x30 mm3) coupled on one end to a single photomultiplier tube (PMT) and on the opposite end to a 64 pixel array of silicon photodiodes (PDs). The PMT provides an accurate timing pulse, the PDs identify the crystal of interaction, the sum provides a total energy signal, and the PD/(PD+PMT) ratio determines the depth of interaction. We have completed construction of all 42 PEM detector modules. All data acquisition electronics have been completed, fully tested and loaded onto the gantry. We have demonstrated that all functions of the custom IC work using the production rigid-flex boards and data acquisition system. Preliminary detector module characterization and coincidence data have been taken using the production system, including initial images.
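    The depth-of-interaction scheme described above (PMT for timing, PDs for crystal identification, sum for total energy, PD/(PD+PMT) for depth) can be sketched as follows. The linear mapping of the ratio onto the 30 mm crystal length is an illustrative assumption; in practice the ratio would be calibrated per crystal.

```python
def decode_event(pd_signal, pmt_signal, crystal_length_mm=30.0):
    """Decode one event from the photodiode (PD) and PMT amplitudes.

    The sum gives the total energy and PD/(PD+PMT) the depth-of-interaction
    measure, as in the abstract; the linear depth calibration is assumed."""
    energy = pd_signal + pmt_signal          # total energy signal
    ratio = pd_signal / energy               # depth-of-interaction measure
    depth_mm = ratio * crystal_length_mm     # assumed linear calibration
    return energy, depth_mm
```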

  11. Evaluation of the Karolinska new positron camera system; The Scanditronix PC2048-15B

    SciTech Connect

    Litton, J.E.; Holte, S.; Eriksson, L.

    1990-04-01

    The PC2048-15B is the brain version of the new generation of Scanditronix positron camera systems, now being installed at the Karolinska Hospital in Stockholm, Sweden. The system has eight rings with 256 6x12x30 mm{sup 3} BGO crystals per ring. Data corresponding to fifteen image slices are simultaneously recorded. The detector rings are based on block-detector units, each with sixteen scintillating crystals mounted on two dual photomultiplier tubes. A system description is given, and test results including spatial resolution, sensitivity to true and random coincidences, scatter correction, count rate linearity, spatial independence and reproducibility are presented. Clinical results are discussed, including rCBF studies and receptor studies, to cover the two most extreme situations met in brain PET studies, i.e., high count rate and high resolution requirements.

  12. The electronics system for the LBNL positron emission mammography (PEM) camera

    SciTech Connect

    Moses, W.W.; Young, J.W.; Baker, K.; Jones, W.; Lenox, M.; Ho, M.H.; Weng, M.

    2000-11-04

    We describe the electronics for a high performance Positron Emission Mammography (PEM) camera. It is based on the electronics for a human brain PET camera (the Siemens/CTI HRRT), modified to use a detector module that incorporates a photodiode (PD) array. An ASIC services the PD array, amplifying its signal and identifying the crystal of interaction. Another ASIC services the photomultiplier tube (PMT), measuring its output and providing a timing signal. Field programmable gate arrays (FPGAs) and lookup RAMs are used to apply crystal-by-crystal correction factors and to measure the energy deposit and the interaction depth (based on the PD/PMT ratio). Additional FPGAs provide event multiplexing, derandomization, coincidence detection, and real-time rebinning. Embedded PC/104 microprocessors provide communication and real-time control, and configure the system. Extensive use of FPGAs makes the overall design extremely flexible, allowing many different functions (or design modifications) to be realized without hardware changes. Incorporation of extensive onboard diagnostics, implemented in the FPGAs, is required by the very high level of integration and density achieved by this system.

  13. Development of a high resolution beta camera for a direct measurement of positron distribution on brain surface

    SciTech Connect

    Yamamoto, S.; Seki, C.; Kashikura, K.

    1996-12-31

    We have developed and tested a high resolution beta camera for direct measurement of the positron distribution on the brain surface of animals. The beta camera consists of a thin CaF{sub 2}(Eu) scintillator, a tapered fiber optics plate (taper fiber) and a position sensitive photomultiplier tube (PSPMT). The taper fiber is the key component of the camera. We have developed two types of beta cameras: one with a 20 mm diameter field of view for imaging the brain surface of cats, the other with a 10 mm diameter field of view for that of rats. The spatial resolutions of the beta cameras for cats and rats were 0.8 mm FWHM and 0.5 mm FWHM, respectively. We confirmed that the developed beta cameras may overcome the limitation of the spatial resolution of positron emission tomography (PET).

  14. Figures of merit for different detector configurations utilized in high resolution positron cameras

    SciTech Connect

    Eriksson, L.; Bergstrom, M.; Bohm, C.; Holte, S.; Kesselberg, M.

    1986-02-01

    A new positron camera system is currently being designed. The goal is an instrument that can measure the whole brain with a spatial resolution of 5 mm FWHM in all directions. In addition to the high spatial resolution, the system must be able to handle count rates of 0.5 MHz or more in order to perform accurate fast dynamic function studies such as the determination of cerebral blood flow and cerebral oxygen consumption following a rapid bolus. An overall spatial resolution of 5 mm requires crystal dimensions of 6 x 6 x L mm{sup 3}, or less, L being the length of the crystal. Timing and energy requirements necessitate high performance photomultipliers. The identification of the small size scintillator crystals can currently only be handled in schemes based on the Anger technique, in the future possibly with photodiodes. In the present work different crystal identification schemes have been investigated. The investigations have involved efficiency measurements of different scintillators, line spread function studies and the evaluation of different coding schemes in order to identify small crystals.

  15. A CF4 based positron trap

    NASA Astrophysics Data System (ADS)

    Marjanovic, Srdjan; Bankovic, Ana; Dujko, Sasa; Deller, Adam; Cooper, Ben; Cassidy, David; Petrovic, Zoran

    2016-05-01

    All positron buffer gas traps in use rely on N2 as the primary trapping gas due to its conveniently placed a¹Π electronic excitation cross section, which is large enough to compete with positronium (Ps) formation in the threshold region. Its energy loss of 8.5 eV is sufficient to capture positrons into a potential well upon a single collision. The competing Ps formation, however, limits the efficiency of the two-stage trap to 25%. As positron moderators produce beams with energies of several eV, we have proposed to use CF4 in the first stage of the trap, due to its large vibrational excitation cross section; several vibrational excitations would be sufficient to trap the positrons with small losses. Apart from the simulations, we also report the results of attempts to apply this approach to an existing Surko-type positron trap. Operating the unmodified trap as a CF4 based device proved to be unsuccessful, due primarily to excessive scattering caused by the high CF4 pressure in the first stage. However, the performance was consistent with subsequent simulations using the real system parameters. This agreement indicates that an efficient CF4 based scheme may be realized in an appropriately designed trap.

  16. Design of POSICAM: A high resolution multislice whole body positron camera

    SciTech Connect

    Mullani, N.A.; Wong, W.H.; Hartz, R.K.; Bristow, D.; Gaeta, J.M.; Yerian, K.; Adler, S.; Gould, K.L.

    1985-01-01

    A high resolution (6 mm), multislice (21) whole body positron camera has been designed with an innovative detector and septa arrangement for 3-D imaging and tracer quantitation. An object of interest such as the brain or the heart is optimally imaged by the 21 simultaneous image planes, which have 12 mm resolution and are separated by 5.5 mm to provide adequate sampling in the axial direction. The detector geometry and the electronics are flexible enough to allow BaF{sub 2}, BGO, GSO or time-of-flight BaF{sub 2} scintillators. The mechanical gantry has been designed for clinical applications and incorporates several features for patient handling and comfort. A large patient opening of 58 cm diameter with a tilt of ±30° and rotation of ±20° permits imaging from different positions without moving the patient. Multiprocessor computing systems and user-friendly software make the POSICAM a powerful 3-D imaging device.

  17. Color measurements based on a color camera

    NASA Astrophysics Data System (ADS)

    Marszalec, Elzbieta A.; Pietikaeinen, Matti

    1997-08-01

    The domain of color camera applications is increasing all the time due to recent progress in color machine vision research. Colorimetric measurement tasks are quite complex, as the purpose of color measurement is to provide a quantitative evaluation of the phenomenon of color as perceived by human vision. A proper colorimetric calibration of the color camera system is needed in order to make color a practical tool in machine vision. This paper discusses two approaches to color measurements based on a color camera and includes an overview of practical approaches to color camera calibration under unstable illumination conditions.

  18. Scatter correction in scintillation camera imaging of positron-emitting radionuclides

    SciTech Connect

    Ljungberg, M.; Danfelter, M.; Strand, S.E.

    1996-12-31

    The use of Anger scintillation cameras for positron SPECT has become of interest recently due to their use in imaging 2-{sup 18}F deoxyglucose. Due to the special crystal design (thin and wide), a significant amount of primary events will also be recorded in the Compton region of the energy spectra. Events recorded in a second Compton window (CW) can add information to the data in the photopeak window (PW), since some events are correctly positioned in the CW. However, a significant amount of scatter is also included in the CW, which needs to be corrected for. This work describes a method whereby a third scatter window (SW) is used to estimate the scatter distribution in the CW and the PW. The accuracy of the estimation has been evaluated by Monte Carlo simulations in a homogeneous elliptical phantom for point and extended sources. Two examples of clinical application are also provided. Results from the simulations show that essentially only scatter from the phantom is recorded between the 511 keV PW and the 340 keV CW. Scatter projection data with a constant multiplier can estimate the scatter in the CW and PW, although the scatter distribution in the SW corresponds better to the scatter distribution in the CW. The multiplier k for the CW varies significantly more with depth than it does for the PW. Clinical studies show an improvement in image quality when using scatter-corrected combined PW and CW data.
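    The window arithmetic of the method can be sketched as follows: counts in the scatter window (SW), scaled by a window-specific multiplier k, estimate the scatter present in the photopeak window (PW) and Compton window (CW). The function name and the sample multiplier values in the test are illustrative assumptions; the real k values are depth- and window-dependent calibration results.

```python
def scatter_corrected_counts(pw_counts, cw_counts, sw_counts, k_pw, k_cw):
    """Estimate primary (unscattered) counts from three energy windows.

    SW counts scaled by k estimate the scatter in each window; subtracting
    that estimate leaves the primary component, and PW and CW primaries
    are combined into one corrected value."""
    pw_primary = pw_counts - k_pw * sw_counts
    cw_primary = cw_counts - k_cw * sw_counts
    # Clamp at zero so statistical fluctuations cannot yield negative counts
    return max(pw_primary, 0.0) + max(cw_primary, 0.0)
```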

  19. The underwater camera calibration based on virtual camera lens distortion

    NASA Astrophysics Data System (ADS)

    Qin, Dahui; Mao, Ting; Cheng, Peng; Zhang, Zhiliang

    2011-08-01

    Machine vision is becoming more and more popular in underwater applications. Calibrating a camera underwater is a challenge because of the complicated light ray path through the water and air media. In this paper we first analyze the behaviour of the camera when light passes from water to air. We then propose a new method that uses a high-order camera distortion model to compensate for the deviation caused by refraction as light rays cross the water-air interface. Experimental results show that the high-order distortion model can simulate the effect of underwater light refraction on the camera's image in the process of underwater camera calibration.
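    A high-order radial distortion model of the kind used to absorb refraction effects can be sketched as follows. The polynomial form and coefficient names are the common Brown-style radial model and stand in here as an assumption, since the paper's exact parameterization is not given in the abstract.

```python
def apply_radial_distortion(x, y, k1, k2, k3):
    """Map ideal (undistorted) normalized image coordinates to distorted ones.

    The radial polynomial scale absorbs systematic radial deviations; using
    extra high-order terms to soak up the water-air refraction deviation is
    the idea described above. Coefficients are placeholders to be estimated
    by calibration."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * scale, y * scale
```

    During calibration the coefficients would be fitted so that the distorted projections of known target points match their observed underwater image positions.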

  20. Dynamic positron computed tomography of the heart with a high sensitivity positron camera and nitrogen-13 ammonia

    SciTech Connect

    Tamaki, N.; Senda, M.; Yonekura, Y.; Saji, H.; Kodama, S.; Konishi, Y.; Ban, T.; Kambara, H.; Kawai, C.; Torizuka, K.

    1985-06-01

    Dynamic positron computed tomography (PCT) of the heart was performed with a high-sensitivity, whole-body multislice PCT device and ({sup 13}N)ammonia. A serial 15-sec dynamic study immediately after i.v. ({sup 13}N)ammonia injection showed the blood pool of the ventricular cavities in the first scan and myocardial images from the third scan in normal cases. In patients with myocardial infarction and mitral valve disease, tracer washout from the lung and the myocardial peak time tended to be longer, suggesting the presence of pulmonary congestion. PCT delineated tracer retention in the dorsal part of the lung. A serial 5-min late dynamic study in nine cases showed a gradual increase in myocardial activity for 30 min in all normal segments and 42% of infarct segments, while less than 13% activity increase was observed in 50% of infarct segments. Thus, serial dynamic PCT with ({sup 13}N)ammonia, assessing tracer kinetics in the heart and lung, is a valuable adjunct to static myocardial perfusion imaging for the evaluation of various cardiac disorders.

  1. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and Adaptive SensitivityTM patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions, without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24 bit color image) to any storage device installed in the camera, or to an external host media via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors, and can search for stored images/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed on the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  2. Multimodal sensing-based camera applications

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  3. Van de Graaff based positron source production

    NASA Astrophysics Data System (ADS)

    Lund, Kasey Roy

    The anti-matter counterpart of the electron, the positron, can be used for a myriad of different scientific research projects, including materials research, energy storage, and deep space flight propulsion. Currently there is a demand for large numbers of positrons to aid in these research projects. There are different methods of producing and harvesting positrons, but all require radioactive sources or large facilities. Positron beams produced by relatively small accelerators are attractive because they are easily shut down, and small accelerators are readily available. A 4 MV Van de Graaff accelerator was used to induce the nuclear reaction 12C(d,n)13N in order to produce an intense beam of positrons. 13N is an isotope of nitrogen that decays with a 10 minute half-life into 13C, a positron, and an electron neutrino. This radioactive gas is frozen onto a cryogenic freezer, from which it is channeled to form an antimatter beam. The beam is then guided using axial magnetic fields into a superconducting magnet with a field strength of up to 7 Tesla, where it will be stored in a newly designed Micro-Penning-Malmberg trap. Several source geometries were tested, and a maximum positron flux of greater than 0.55x10^6 e+ s^-1 was achieved. This beam was produced using a solid rare gas moderator composed of krypton. Due to geometric restrictions on this setup, only 0.1-1.0% of the antimatter was frozen to the desired locations. Simulations and preliminary experiments suggest that a new geometry, currently under testing, will produce a beam of 10^7 e+ s^-1 or more.

  4. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology, where it is most useful to localize primary tumours and their metastases.
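    The coincidence principle behind the ring-detector measurement can be illustrated with a toy sketch: two 511 keV photons detected almost simultaneously in different detectors define a line of response. The timing window and energy gate values below are illustrative assumptions, not taken from the article.

```python
def find_coincidences(hits, window_ns=6.0, e_low=450.0, e_high=570.0):
    """Pair single annihilation-photon detections into coincidence events.

    `hits` is a list of (time_ns, detector_id, energy_keV) tuples sorted by
    time. Two energy-gated hits in different detectors within the timing
    window are recorded as one line of response (pair of detector ids)."""
    events = []
    for i, (t1, d1, e1) in enumerate(hits):
        if not (e_low <= e1 <= e_high):
            continue  # reject scattered / low-energy singles
        for t2, d2, e2 in hits[i + 1:]:
            if t2 - t1 > window_ns:
                break  # hits are time-sorted, no later match possible
            if d2 != d1 and e_low <= e2 <= e_high:
                events.append((d1, d2))  # line of response
    return events
```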

  5. A slanting light-guide analog decoding high resolution detector for positron emission tomography camera

    SciTech Connect

    Wong, W.H.; Jing, M.; Bendriem, B.; Hartz, R.; Mullani, N.; Gould, K.L.; Michel, C.

    1987-02-01

    Current high resolution PET cameras require the scintillation crystals to be much narrower than the smallest available photomultipliers. In addition, the large number of photomultiplier channels constitutes the major component cost in the camera. Recent new designs use the Anger camera type of analog decoding method to obtain higher resolution and lower cost by using the relatively large photomultipliers. An alternative approach to improve the resolution and cost factors has been proposed, with a system of slanting light-guides between the scintillators and the photomultipliers. In the Anger camera schemes, the scintillation light is distributed to several neighboring photomultipliers which then determine the scintillation location. In the slanting light-guide design, the scintillation is metered and channeled to only two photomultipliers for the decision making. This paper presents the feasibility and performance achievable with the slanting light-guide detectors. With a crystal/photomultiplier ratio of 6/1, the intrinsic resolution was found to be 4.0 mm using the first non-optimized prototype light-guides on BGO crystals. The axial resolution will be about 5-6 mm.

  6. Formation mechanisms and optimization of trap-based positron beams

    NASA Astrophysics Data System (ADS)

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-02-01

    Described here are simulations of pulsed, magnetically guided positron beams formed by ejection from Penning-Malmberg-style traps. In a previous paper [M. R. Natisin et al., Phys. Plasmas 22, 033501 (2015)], simulations were developed and used to describe the operation of an existing trap-based beam system and provided good agreement with experimental measurements. These techniques are used here to study the processes underlying beam formation in more detail and under more general conditions, therefore further optimizing system design. The focus is on low-energy beams (~eV) with the lowest possible spread in energies (<10 meV), while maintaining microsecond pulse durations. The simulations begin with positrons trapped within a potential well and subsequently ejected by raising the bottom of the trapping well, forcing the particles over an end-gate potential barrier. Under typical conditions, the beam formation process is intrinsically dynamical, with the positron dynamics near the well lip, just before ejection, particularly crucial to setting beam quality. In addition to an investigation of the effects of beam formation on beam quality under typical conditions, two other regimes are discussed; one occurring at low positron temperatures in which significantly lower energy and temporal spreads may be obtained, and a second in cases where the positrons are ejected on time scales significantly faster than the axial bounce time, which results in the ejection process being essentially non-dynamical.

  7. Recent progress in tailoring trap-based positron beams

    SciTech Connect

    Natisin, M. R.; Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2013-03-19

    Recent progress is described to implement two approaches to specially tailor trap-based positron beams. Experiments and simulations are presented to understand the limits on the energy spread and pulse duration of positron beams extracted from a Penning-Malmberg (PM) trap after the particles have been buffer-gas cooled (or heated) in the range of temperatures 1000 {>=} T {>=} 300 K. These simulations are also used to predict beam performance for cryogenically cooled positrons. Experiments and simulations are also presented to understand the properties of beams formed when plasmas are tailored in a PM trap in a 5 tesla magnetic field, then non-adiabatically extracted from the field using a specially designed high-permeability grid to create a new class of electrostatically guided beams.

  8. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-08-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other together with a short-lived one are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 200 m and horizontal speed to 0.2 m s-1. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  9. Contrail study with ground-based cameras

    NASA Astrophysics Data System (ADS)

    Schumann, U.; Hempel, R.; Flentje, H.; Garhammer, M.; Graf, K.; Kox, S.; Lösslein, H.; Mayer, B.

    2013-12-01

    Photogrammetric methods and analysis results for contrails observed with wide-angle cameras are described. Four cameras of two different types (view angle < 90° or whole-sky imager) at the ground at various positions are used to track contrails and to derive their altitude, width, and horizontal speed. Camera models for both types are described to derive the observation angles for given image coordinates and their inverse. The models are calibrated with sightings of the Sun, the Moon and a few bright stars. The methods are applied and tested in a case study. Four persistent contrails crossing each other, together with a short-lived one, are observed with the cameras. Vertical and horizontal positions of the contrails are determined from the camera images to an accuracy of better than 230 m and horizontal speed to 0.2 m s⁻¹. With this information, the aircraft causing the contrails are identified by comparison to traffic waypoint data. The observations are compared with synthetic camera pictures of contrails simulated with the contrail prediction model CoCiP, a Lagrangian model using air traffic movement data and numerical weather prediction (NWP) data as input. The results provide tests for the NWP and contrail models. The cameras show spreading and thickening contrails, suggesting ice-supersaturation in the ambient air. The ice-supersaturated layer is found thicker and more humid in this case than predicted by the NWP model used. The simulated and observed contrail positions agree up to differences caused by uncertain wind data. The contrail widths, which depend on wake vortex spreading, ambient shear and turbulence, were partly wider than simulated.

  10. 3-D target-based distributed smart camera network localization.

    PubMed

    Kassebaum, John; Bulusu, Nirupama; Feng, Wu-Chi

    2010-10-01

    For distributed smart camera networks to perform vision-based tasks such as subject recognition and tracking, every camera's position and orientation relative to a single 3-D coordinate frame must be accurately determined. In this paper, we present a new camera network localization solution in which a feature-point-rich 3-D target is successively shown to all cameras; using the known geometry of the 3-D target, the cameras estimate and decompose projection matrices to compute their position and orientation relative to the coordinatization of the 3-D target's feature points. As each 3-D target position establishes a distinct coordinate frame, cameras that view more than one 3-D target position compute translations and rotations relating the different positions' coordinate frames and share the transform data with neighbors to facilitate realignment of all cameras to a single coordinate frame. Compared to other localization solutions that use opportunistically found visual data, our solution is more suitable for battery-powered, processing-constrained camera networks because it requires communication only to determine simultaneous target viewings and to pass transform data. Additionally, our solution requires only pairwise view overlaps of sufficient size to see the 3-D target and detect its feature points, while also giving camera positions in meaningful units. We evaluate our algorithm in both real and simulated smart camera networks. In the real network, position error is less than 1'' when the 3-D target's feature points fill only 2.9% of the frame area. PMID:20679031
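
    The "estimate and decompose projection matrices" step above can be sketched with the standard RQ-based factorization of a 3x4 projection matrix into intrinsics, rotation, and camera center. This is a generic technique, not the paper's exact pipeline, and the synthetic camera numbers are assumptions:

```python
import numpy as np

def rq3(M):
    """RQ decomposition of a 3x3 matrix via QR of the row-reversed transpose."""
    Q, R = np.linalg.qr(M[::-1, :].T)
    return R.T[::-1, ::-1], Q.T[::-1, :]

def decompose_projection(P):
    """Split a 3x4 projection P = K [R | -R C] into intrinsics K,
    rotation R, and camera center C (in target coordinates)."""
    M = P[:, :3]
    K, R = rq3(M)
    D = np.diag(np.sign(np.diag(K)))       # force positive focal lengths
    K, R = K @ D, D @ R
    C = -np.linalg.solve(M, P[:, 3])       # from P [C; 1] = 0
    return K / K[2, 2], R, C

# Synthetic camera: known intrinsics, a rotation about z, and a center.
K0 = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
th = 0.3
R0 = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
C0 = np.array([1.0, 2.0, 3.0])
P = K0 @ R0 @ np.hstack([np.eye(3), -C0[:, None]])
K, R, C = decompose_projection(P)
print(np.allclose(K, K0), np.allclose(R, R0), np.allclose(C, C0))  # True True True
```

In practice P itself would first be estimated from the detected 3-D target feature points and their image projections (e.g. by DLT); the decomposition then yields the camera pose in the target's coordinate frame.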

  11. Conceptual design of an intense positron source based on an LIA

    NASA Astrophysics Data System (ADS)

    Long, Ji-Dong; Yang, Zhen; Dong, Pan; Shi, Jin-Shui

    2012-04-01

    Accelerator-based positron sources are widely used due to their high intensity. Most of these accelerators are RF accelerators. An LIA (linear induction accelerator) is a kind of high-current pulsed accelerator used for radiography. A conceptual design of an intense pulsed positron source based on an LIA is presented in this paper. One advantage of an LIA is that its pulsed power is higher than that of conventional accelerators, which means a larger number of primary electrons for positron generation per pulse. Another advantage of an LIA is that it is well suited to decelerating the positron bunch generated by the bremsstrahlung pair process, owing to its ability to shape the voltage pulse adjustably. By implementing LIA cavities to decelerate the positron bunch before it is moderated, the positron yield could be greatly increased. These features may make the LIA-based positron source a high-intensity pulsed positron source.

  12. Infrared camera based on a curved retina.

    PubMed

    Dumas, Delphine; Fendler, Manuel; Berger, Frédéric; Cloix, Baptiste; Pornin, Cyrille; Baier, Nicolas; Druart, Guillaume; Primot, Jérôme; le Coarer, Etienne

    2012-02-15

    The design of miniature, lightweight cameras requires an optical design breakthrough to achieve good optical performance. Solutions inspired by animals' eyes are the most promising. The curvature of the retina offers several advantages, such as uniform intensity and the absence of field curvature, yet this feature is rarely exploited. The work presented here is a solution for spherically bending monolithic IR detectors. Compared to state-of-the-art methods, a higher fill factor is obtained and the device fabrication process is not modified. We made an IR eye camera with a single lens and a curved IR bolometer. The captured images are well resolved and have good contrast, and the modulation transfer function shows better quality than that of comparable planar systems. PMID:22344137

  13. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity because of the limitations imposed by the computation technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted and light field technology has quickly returned to the spotlight of the research stage. In this paper, PBRT (Physically Based Rendering Tracing) was introduced to overcome the limitations of using traditional optical simulation approaches to study light field camera technology. More specifically, traditional optical simulation approaches can present light energy distributions but typically lack the capability to render realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation for creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurement results into the virtually created scenes. In other words, our approach provides a way to link a virtual scene with real measurement results. Several images developed with the above-mentioned approaches were analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this approach can circumvent the loss of spatial resolution associated with placing a micro-lens array in front of the image sensor. The operational constraints, performance metrics, computational resources needed, etc., associated with this light field camera technique are presented in detail.

  14. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called `Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D `success' of the desired motion, i.e., the end effector of the manipulator engaging a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras that are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth-moving equipment are common examples of nonholonomic systems with on-board holonomic actuators. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and nonholonomic kinematics are fundamentally path dependent. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  15. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon Limit since it does not make use of the correlation between pixels of image data. Applying a random phase modulator to code the spectral images and combining with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. GISC spectral camera can acquire the information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral images data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, GISC spectral camera opens the way of approaching the Shannon Limit determined by Information Theory in optical imaging instruments. PMID:27180619
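
    The sparsity-constrained recovery that GISC-style imaging relies on can be illustrated with a toy compressive-sensing example: measurements below the Nyquist count still recover a sparse signal. Plain ISTA is used here as a stand-in solver, and the sensing matrix and all dimensions are hypothetical, not the instrument's modulator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 5                            # signal length, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix (stand-in)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                   # bucket-style measurements, m < n

# ISTA: iterative soft-thresholding for min ||Ax - y||^2 + lam * ||x||_1
lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2           # 1 / (largest singular value)^2
x = np.zeros(n)
for _ in range(5000):
    g = x - step * A.T @ (A @ x - y)             # gradient step on the data term
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)   # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))     # small relative error
```

With 48 measurements of a 5-sparse, 128-sample signal, the l1-regularized solution recovers the support and amplitudes to within the small bias introduced by the regularizer.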

  18. Positron annihilation in cardo-based polymer membranes.

    PubMed

    Kobayashi, Y; Kazama, Shingo; Inoue, K; Toyama, T; Nagai, Y; Haraya, K; Mohamed, Hamdy F M; O'Rouke, B E; Oshima, N; Kinomura, A; Suzuki, R

    2014-06-01

    Positron annihilation lifetime spectroscopy (PALS) is applied to a series of bis(aniline)fluorene- and bis(xylidine)fluorene-based cardo polyimide and bis(phenol)fluorene-based polysulfone membranes. It was found that appreciable amounts of positronium (Ps, the positron-electron bound state) form in cardo polyimides with the 2,2-bis(3,4-dicarboxyphenyl) hexafluoropropane dianhydride (6FDA) moiety and in bis(phenol)fluorene-based cardo polysulfone, but no Ps forms in most of the polyimides with pyromellitic dianhydride (PMDA) and 3,3',4,4'-biphenyltetracarboxylic dianhydride (BTDA) moieties. A bis(xylidine)fluorene-based polyimide membrane containing PMDA and BTDA moieties exhibits slight Ps formation, but the ortho-positronium (o-Ps, the triplet state of Ps) lifetime of this membrane shortens anomalously with increasing temperature, which we attribute to a chemical reaction of o-Ps. The correlation between the hole size (V(h)) deduced from the o-Ps lifetime and the diffusion coefficients of O2 and N2, for the polyimides with the 6FDA moiety and the cardo polysulfone showing appreciable Ps formation, is discussed based on the free-volume theory of gas diffusion. It is suggested that o-Ps has a strong tendency to probe larger holes in rigid-chain polymers with wide hole-size distributions, such as those containing cardo moieties, resulting in deviations from the previously reported correlations for common polymers such as polystyrene, polycarbonate, polysulfone, and so forth. PMID:24815092

  19. Camera array based light field microscopy.

    PubMed

    Lin, Xing; Wu, Jiamin; Zheng, Guoan; Dai, Qionghai

    2015-09-01

    This paper proposes a novel approach for high-resolution light field microscopy imaging by using a camera array. In this approach, we apply a two-stage relay system for expanding the aperture plane of the microscope into the size of an imaging lens array, and utilize a sensor array for acquiring different sub-apertures images formed by corresponding imaging lenses. By combining the rectified and synchronized images from 5 × 5 viewpoints with our prototype system, we successfully recovered color light field videos for various fast-moving microscopic specimens with a spatial resolution of 0.79 megapixels at 30 frames per second, corresponding to an unprecedented data throughput of 562.5 MB/s for light field microscopy. We also demonstrated the use of the reported platform for different applications, including post-capture refocusing, phase reconstruction, 3D imaging, and optical metrology. PMID:26417490

  1. Exploring positron characteristics utilizing two new positron-electron correlation schemes based on multiple electronic structure calculation methods

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Shuai; Gu, Bing-Chuan; Han, Xiao-Xi; Liu, Jian-Dang; Ye, Bang-Jiao

    2015-10-01

    We apply a gradient correction to a new local-density-approximation form of the positron-electron correlation. Positron lifetimes and affinities are then probed using these two approximation forms with three electronic-structure calculation methods: the full-potential linearized augmented plane wave (FLAPW) plus local orbitals approach, the atomic superposition (ATSUP) approach, and the projector augmented wave (PAW) approach. The differences between lifetimes calculated with the FLAPW and ATSUP methods are clearly interpreted in view of positron and electron transfers. We further find that a well-implemented PAW method can give near-perfect agreement with the FLAPW method on both positron lifetimes and affinities, and that the competitiveness of the ATSUP method against the FLAPW/PAW methods is reduced within the best calculations. By comparison with experimental data, the newly introduced gradient-corrected correlation form proves competitive for positron lifetime and affinity calculations. Project supported by the National Natural Science Foundation of China (Grant Nos. 11175171 and 11105139).

  2. A new scheme to accumulate positrons in a Penning-Malmberg trap with a linac-based positron pulsed source

    SciTech Connect

    Dupre, P.

    2013-03-19

    The Gravitational Behaviour of Antimatter at Rest (GBAR) experiment is designed to perform a direct measurement of the weak equivalence principle on antimatter by measuring the acceleration of anti-hydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high-density positronium (Ps) cloud as a target for antiprotons provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. The Ps target will be produced by a pulse of a few 10¹⁰ positrons injected onto a positron-positronium converter. For this purpose, a slow positron source using an electron linac has been constructed at Saclay. The present flux is comparable with that of ²²Na-based sources using a solid neon moderator. A new positron accumulation scheme with a Penning-Malmberg trap has been proposed, taking advantage of the pulsed time structure of the beam. In the trap, the positrons are cooled by interaction with a dense electron plasma. The overall trapping efficiency has been estimated by numerical simulations to be ≈70%.

  3. Triangulation-Based Camera Calibration For Machine Vision Systems

    NASA Astrophysics Data System (ADS)

    Bachnak, Rafic A.; Celenk, Mehmet

    1990-04-01

    This paper describes a camera calibration procedure for stereo-based machine vision systems. The method is based on geometric triangulation using only a single image of three distinct points. Both the intrinsic and extrinsic parameters of the system are determined. The procedure is performed only once, at the initial set-up, using a simple camera model. The effective focal length is extended in such a way that a linear transformation exists between the camera image plane and the output digital image. Only three world points are needed to find the extended focal length and the transformation-matrix elements that relate the camera position and orientation to a real-world coordinate system. The parameters of the system are computed by solving a set of linear equations. Experimental results show that the method, when used in a stereo system developed in this research, produces reasonably accurate 3-D measurements.

  4. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parametric equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parametric equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root-mean-squared error is within 0.14 mm. Furthermore, the experiments indicate that the method is simple to operate and offers good flexibility, especially for on-site multiple cameras without a common field of view. PMID:26761007

  5. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as on other occasions such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed; the geometric characteristics of the vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. By using the orthogonality of the vectors connecting the optical center with the vanishing points, constraint equations on the camera's intrinsic parameters are established. With this method, four intrinsic parameters of the camera can be solved for using only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines, needs no strict experimental equipment, and has a convenient calibration process and a simple algorithm. Comparison with the experimental results of the calibration-plate method and of the self-calibration routine of the machine vision software Halcon verifies the practicability and effectiveness of the proposed method.
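
    The core constraint can be made concrete: under the common simplifying assumptions of square pixels, zero skew, and a known principal point, two vanishing points of orthogonal directions determine the focal length directly. A minimal sketch under those assumptions (the synthetic camera below is illustrative, not the paper's setup):

```python
import numpy as np

def focal_from_vanishing_points(vp1, vp2, pp):
    """Focal length (pixels) from two vanishing points of orthogonal
    directions, assuming square pixels, zero skew, known principal point pp.
    Derived from vp1^T (K K^T)^-1 vp2 = 0."""
    d1, d2 = np.asarray(vp1) - pp, np.asarray(vp2) - pp
    f2 = -np.dot(d1, d2)
    if f2 <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(f2)

def vanishing_point(K, d):
    """Image of the point at infinity in direction d: dehomogenize K d."""
    h = K @ d
    return h[:2] / h[2]

# Synthetic check: project two orthogonal 3-D directions with a known camera.
f, pp = 900.0, np.array([640.0, 360.0])
K = np.array([[f, 0.0, pp[0]], [0.0, f, pp[1]], [0.0, 0.0, 1.0]])
v1 = vanishing_point(K, np.array([1.0, 1.0, 2.0]))
v2 = vanishing_point(K, np.array([1.0, 1.0, -1.0]))   # orthogonal to the first
print(focal_from_vanishing_points(v1, v2, pp))        # ≈ 900.0
```

Recovering all four intrinsic parameters, as in the paper, requires relaxing these assumptions and stacking the constraints from several views.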

  6. Design analysis and performance evaluation of a two-dimensional camera for accelerated positron-emitter beam injection by computer simulation

    SciTech Connect

    Llacer, J.; Chatterjee, A.; Batho, E.K.; Poskanzer, J.A.

    1982-05-01

    The characteristics and design of a high-accuracy, high-sensitivity two-dimensional camera for measuring the end-point of the trajectory of accelerated heavy-ion beams of positron-emitting isotopes are described. Computer simulation methods have been used to ensure that the design meets the demanding criteria of locating the centroid of a point source in the X-Y plane with errors smaller than 1 mm, for an activity of 100 nanoCi, in a counting time of 5 sec or less. A computer program that can be developed into a general-purpose analysis tool for a large number of positron-emitter camera configurations is described in its essential parts. The validation of basic simulation results with simple measurements is reported, and the use of the program to generate simulated images that include important second-order effects due to detector material, geometry, septa, etc. is demonstrated. Comparison between simulated images and initial results with the completed instrument shows that the desired specifications have been met.
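
    The sub-millimetre centroid criterion can be sanity-checked with a quick Monte Carlo estimate of the statistical limit: for N detected events blurred by a Gaussian detector response, the centroid's standard error is sigma/sqrt(N) per axis. The blur width and count numbers below are illustrative assumptions, not the instrument's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated detector hits from a point source, blurred by detector resolution.
true_xy = np.array([12.0, -7.0])   # mm, hypothetical source position
sigma = 2.0                        # mm, assumed per-event position blur
counts = 500                       # assumed events collected in the counting window

hits = true_xy + sigma * rng.standard_normal((counts, 2))
centroid = hits.mean(axis=0)

# Standard error of the centroid: sigma / sqrt(N) per axis (~0.09 mm here),
# comfortably inside a 1 mm error budget.
print(np.linalg.norm(centroid - true_xy))
```

A full simulation like the one in the paper adds the systematic second-order effects (detector material, geometry, septa) on top of this purely statistical floor.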

  7. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. To improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, economic or physical restrictions prevent us from adding cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. To accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding-horizon control in which we model the underlying optimal control problem as a mixed-integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting their kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but also use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to observe the entire region of interest in an effective and thorough manner.
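
    The Kalman-filter coupling can be sketched with a one-dimensional constant-velocity model: the covariance of a target that no camera observes keeps growing under the predict step, and that growing uncertainty is exactly the signal a scheduler can use to revisit neglected targets. The model matrices below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
H = np.array([[1.0, 0.0]])              # a camera measures position only
Q = 0.01 * np.eye(2)                    # assumed process noise
R = np.array([[0.25]])                  # assumed measurement noise

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    return x + K @ (z - H @ x), (np.eye(2) - K @ H) @ P

# Target A is observed every step; target B is never observed.
x_a, P_a = np.zeros(2), np.eye(2)
x_b, P_b = np.zeros(2), np.eye(2)
for t in range(10):
    x_a, P_a = predict(x_a, P_a)
    x_b, P_b = predict(x_b, P_b)
    x_a, P_a = update(x_a, P_a, np.array([float(t)]))
print(np.trace(P_a) < np.trace(P_b))    # True: the unobserved target's uncertainty grows
```

In the paper's scheme, terms derived from such covariances enter the mixed-integer program's objective, steering camera assignments toward the targets with the largest predicted uncertainty.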

  8. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral-reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and the quality of the reconstruction. In this paper, a method for training-sample selection based on camera response is proposed. It has been shown that the camera response value is closely correlated with the spectral reflectance. Consequently, we adopt the technique of drawing a sphere in camera-response space to select the training samples that are most correlated with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that sample selection based on camera response yields the smallest color difference and root-mean-square error after reconstruction, compared with using the full set of Munsell color charts, the Mohammadi training-sample selection method, and the stratified sampling method. Moreover, its goodness-of-fit coefficient is also the highest among the four sample-selection methods. Taking all of these factors into consideration, training-sample selection based on camera response enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
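
    Wiener estimation itself amounts to a linear map from camera responses to reflectances, learned from the training set's cross- and auto-correlation matrices. A minimal noiseless sketch with a hypothetical low-dimensional reflectance model (the basis, sensitivities, and dimensions are assumptions, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_basis, n_train = 31, 3, 60

B = rng.uniform(size=(n_bands, n_basis))             # hypothetical reflectance basis
R_train = B @ rng.uniform(size=(n_basis, n_train))   # training reflectances (columns)
M = rng.uniform(size=(3, n_bands))                   # hypothetical RGB sensitivities
C_train = M @ R_train                                # camera responses for training set

# Wiener estimation matrix: W = R C^T (C C^T)^-1
W = R_train @ C_train.T @ np.linalg.inv(C_train @ C_train.T)

r_test = B @ rng.uniform(size=n_basis)               # unseen sample from the same model
r_hat = W @ (M @ r_test)                             # reconstruct from its camera response
print(np.max(np.abs(r_hat - r_test)))                # ~0 in this noiseless low-rank sketch
```

Training-sample selection changes which columns enter `R_train`/`C_train`; choosing samples whose responses lie near the test response conditions `W` on the locally relevant statistics, which is the effect the paper's sphere-drawing selection exploits.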

  9. Optimization of drift bias in an UHV based pulsed positron beam system

    SciTech Connect

    Anto, C. Varghese; Rajaraman, R.; Rao, G. Venugopal; Abhaya, S.; Parimala, J.; Amarendra, G.

    2012-06-05

    We report here the design of an ultra-high-vacuum (UHV) compatible pulsed positron-beam lifetime system, which combines the principles of a conventional slow positron beam with an RF-based pulsing scheme. The mechanical design and construction of the UHV system housing the beam have been completed, and it has been tested to a vacuum of ≈10⁻¹⁰ mbar. The voltages applied to the drift tube as a function of positron energy have been optimized using SIMION.

  10. Design of microcontroller based system for automation of streak camera

    SciTech Connect

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-15

    A microcontroller-based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated by a step-generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronizing the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW-based graphical user interface enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  12. Embedded design based virtual instrument program for positron beam automation

    NASA Astrophysics Data System (ADS)

    Jayapandian, J.; Gururaj, K.; Abhaya, S.; Parimala, J.; Amarendra, G.

    2008-10-01

    Automation of a positron-beam experiment with a single-chip embedded design using a programmable system-on-chip (PSoC), which provides easy interfacing of the high-voltage DC power supply, is reported. A virtual-instrument (VI) control program written in Visual Basic 6.0 performs the following functions: (i) adjustment of the sample high voltage by interacting with the programmed PSoC hardware, (ii) control of a personal computer (PC) based multichannel analyzer (MCA) card for energy spectroscopy, (iii) analysis of the obtained spectrum to extract the relevant line-shape parameters, (iv) plotting of the relevant parameters, and (v) saving the file in the appropriate format. The present study highlights the hardware features of the PSoC module as well as the control of the MCA and other units through programming in Visual Basic.

  13. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
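    The RANSAC-wrapped orthogonal Procrustes alignment at the heart of this method can be sketched as follows; the minimal-sample size, residual threshold and function names are illustrative choices, not taken from the paper:

```python
import numpy as np

def procrustes_rt(A, B):
    """Least-squares rigid transform (R, t) mapping 3D points A onto B,
    solved in closed form via SVD (orthogonal Procrustes / Kabsch)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t

def ransac_procrustes(A, B, n_iter=200, thresh=0.05, rng=None):
    """RANSAC wrapper: fit on random minimal subsets, keep the hypothesis
    with the largest inlier set, then refit on all of its inliers."""
    rng = np.random.default_rng(rng)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(A), size=3, replace=False)
        R, t = procrustes_rt(A[idx], B[idx])
        resid = np.linalg.norm(A @ R.T + t - B, axis=1)
        inliers = resid < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return procrustes_rt(A[best_inliers], B[best_inliers])
```

    The reflection guard (the sign of the determinant) keeps the recovered matrix a proper rotation even for noisy or degenerate minimal samples.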
  15. A Robust Camera-Based Interface for Mobile Entertainment.

    PubMed

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies that consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and configurable factors such as the gain and the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user's perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  17. Image-based camera motion estimation using prior probabilities

    NASA Astrophysics Data System (ADS)

    Sargent, Dusty; Park, Sun Young; Spofford, Inbar; Vosburgh, Kirby

    2011-03-01

    Image-based camera motion estimation from video or still images is a difficult problem in the field of computer vision. Many algorithms have been proposed for estimating intrinsic camera parameters, detecting and matching features between images, calculating extrinsic camera parameters based on those features, and optimizing the recovered parameters with nonlinear methods. These steps in the camera motion inference process all face challenges in practical applications: locating distinctive features can be difficult in many types of scenes given the limited capabilities of current feature detectors, camera motion inference can easily fail in the presence of noise and outliers in the matched features, and the error surfaces in optimization typically contain many suboptimal local minima. The problems faced by these techniques are compounded when they are applied to medical video captured by an endoscope, which presents further challenges such as non-rigid scenery and severe barrel distortion of the images. In this paper, we study these problems and propose the use of prior probabilities to stabilize camera motion estimation for the application of computing endoscope motion sequences in colonoscopy. Colonoscopy presents a special case for camera motion estimation in which it is possible to characterize typical motion sequences of the endoscope. As the endoscope is restricted to move within a roughly tube-shaped structure, forward/backward motion is expected, with only small amounts of rotation and horizontal movement. We formulate a probabilistic model of endoscope motion by maneuvering an endoscope and attached magnetic tracker through a synthetic colon model and fitting a distribution to the observed motion of the magnetic tracker. This model enables us to estimate the probability of the current endoscope motion given previously observed motion in the sequence. We add these prior probabilities into the camera motion calculation as an additional penalty term in RANSAC.
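    As an illustration of how such a prior can enter the hypothesis scoring, the sketch below combines a RANSAC inlier count with a Gaussian log-prior on the motion parameters; the distribution parameters and weight are hypothetical, not the values fitted in the paper:

```python
def motion_log_prior(tz, rot_deg, tz_mean=1.0, tz_sigma=0.5, rot_sigma=5.0):
    """Hypothetical Gaussian log-prior on endoscope motion: forward
    translation tz expected near tz_mean, rotation magnitude near zero."""
    return (-0.5 * ((tz - tz_mean) / tz_sigma) ** 2
            - 0.5 * (rot_deg / rot_sigma) ** 2)

def score_hypothesis(n_inliers, tz, rot_deg, weight=2.0):
    """RANSAC score = inlier count plus a weighted prior penalty term,
    so an implausible motion needs many more inliers to be selected."""
    return n_inliers + weight * motion_log_prior(tz, rot_deg)
```

    With this scoring, a hypothesis with slightly fewer inliers but plausible forward motion beats one with more inliers but a large sideways jump and rotation.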

  18. Camera calibration approach based on adaptive active target

    NASA Astrophysics Data System (ADS)

    Zhang, Yalin; Zhou, Fuqiang; Deng, Peng

    2011-12-01

    To calibrate a camera on site, where the lighting conditions are hard to control and the quality of target images declines as the angle between camera and target changes, an adaptive active target is designed and a camera calibration approach based on this target is proposed. The adaptive active target, in which LEDs are embedded, is flat and provides active feature points, so the brightness of each feature point can be modified by adjusting the drive current, guided by thresholds on image feature criteria. In order to extract image features accurately, the concept of subpixel-precise thresholding is also proposed. It converts the discrete representation of the digital image into a continuous function by bilinear interpolation, and the sub-pixel contours are obtained as the intersection of this continuous function with an appropriately selected threshold. Based on an analysis of the relationship between the image features and the brightness of the target, the area ratio of convex hulls and the grey-value variance are adopted as the criteria. Experimental results show that the adaptive active target adapts well to changing illumination, and that the camera calibration approach based on it achieves a high level of accuracy, making it well suited to imaging targets in various industrial sites.
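    The sub-pixel thresholding idea can be illustrated in one dimension, where bilinear interpolation reduces to linear interpolation between neighbouring pixels (a simplified sketch, not the authors' implementation):

```python
import numpy as np

def subpixel_crossings(row, thresh):
    """Sub-pixel positions where a 1-D intensity profile crosses `thresh`,
    modelling the profile as piecewise-linear between pixel centres
    (the 1-D analogue of the bilinear image model)."""
    row = np.asarray(row, dtype=float)
    xs = []
    for i in range(len(row) - 1):
        a, b = row[i], row[i + 1]
        if (a - thresh) * (b - thresh) < 0:  # sign change => a crossing
            xs.append(i + (thresh - a) / (b - a))
    return xs
```

    In 2-D, the same intersection is computed along both image axes of the bilinearly interpolated surface, which yields contour points at sub-pixel precision.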

  19. An Undulator Based Polarized Positron Source for CLIC

    SciTech Connect

    Liu, Wanming; Gai, Wei; Rinolfi, Louis; Sheppard, John; /SLAC

    2012-07-02

    A viable positron source scheme is proposed that uses circularly polarized gamma rays generated from the main 250 GeV electron beam. The beam passes through a helical superconducting undulator with a magnetic field of ~1 Tesla and a period of 1.15 cm. The gamma rays produced in the undulator, in the energy range between ~3 MeV and 100 MeV, are directed onto a titanium target to produce polarized positrons. The positrons are then captured, accelerated and transported to a Pre-Damping Ring (PDR). Detailed parameter studies of this scheme, including positron yield and undulator parameter dependence, are presented. Effects on the 250 GeV CLIC main beam, including emittance growth and energy loss from the beam passing through the undulator, are also discussed.

  20. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
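    The two elementary steps can be sketched as an edge-gated smoothing iteration; for illustration a crisp threshold on neighbour differences stands in for the paper's fuzzy-rule-based derivatives, and the parameter values are invented:

```python
import numpy as np

def edge_preserving_smooth(img, edge_thresh=0.5, n_iter=5):
    """Iterate the two elementary steps on a 2-D image:
    (1) edge detection in the 4-neighbourhood, here via a crisp
        threshold on the largest absolute neighbour difference;
    (2) smoothing (averaging) only at pixels with no detected edge."""
    img = np.asarray(img, dtype=float).copy()
    for _ in range(n_iter):
        p = np.pad(img, 1, mode='edge')
        nbrs = np.stack([p[:-2, 1:-1], p[2:, 1:-1],   # up, down
                         p[1:-1, :-2], p[1:-1, 2:]])  # left, right
        grad = np.abs(nbrs - img).max(axis=0)          # step 1: edge detection
        mean = nbrs.mean(axis=0)
        img = np.where(grad < edge_thresh, mean, img)  # step 2: smoothing
    return img
```

    Pixels adjacent to a strong step edge keep their values, while isolated low-amplitude noise in flat regions is averaged away, which is the qualitative behaviour the fuzzy potential function is designed to produce.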

  1. EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION FOR MTR CANAL. CAISSONS FLANK EACH SIDE. COUNTERFORT (SUPPORT PERPENDICULAR TO WHAT WILL BE THE LONG WALL OF THE CANAL) RESTS ATOP LEFT CAISSON. IN LOWER PART OF VIEW, DRILLERS PREPARE TRENCHES FOR SUPPORT BEAMS THAT WILL LIE BENEATH CANAL FLOOR. INL NEGATIVE NO. 739. Unknown Photographer, 10/6/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  2. rf streak camera based ultrafast relativistic electron diffraction.

    PubMed

    Musumeci, P; Moody, J T; Scoby, C M; Gutierrez, M S; Tran, T

    2009-01-01

    We theoretically and experimentally investigate the possibility of using an rf streak camera to time-resolve, in a single shot, structural changes at the sub-100 fs time scale via relativistic electron diffraction. We experimentally tested this novel concept at the UCLA Pegasus rf photoinjector. Time-resolved diffraction patterns from a thin Al foil are recorded. Averaging over 50 shots is required in order to obtain statistics sufficient to uncover a variation in time of the diffraction patterns. In the absence of an external pump laser, this is explained as due to the energy chirp on the beam out of the electron gun. With further improvements to the electron source, rf streak camera based ultrafast electron diffraction has the potential to yield truly single-shot measurements of ultrafast processes. PMID:19191429

  3. A novel compact gamma camera based on flat panel PMT

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Cinti, M. N.; Trotta, C.; Trotta, G.; Scafè, R.; Betti, M.; Cusanno, F.; Montani, Livia; Iurlaro, Giorgia; Garibaldi, F.; Del Guerra, A.

    2003-11-01

    Over the last ten years, strong technological advances in position sensitive detectors have encouraged the scientific community to develop dedicated imagers for new diagnostic techniques in the field of isotope functional imaging. The main feature of the new detectors is a compactness that allows detection geometries fitting the body anatomy. Position sensitive photomultiplier tubes (PSPMTs) have shown very good features with continuous improvement. In 1997 a novel gamma camera was proposed based on a closely packed array of second-generation 1-inch PSPMTs. Its main advantage is a potentially unlimited detection area, but with the disadvantage of a relatively large non-active area (30%). The Hamamatsu H8500 Flat Panel PMT represents the latest generation of PSPMT. Its extreme compactness allows array assembly with an improved effective area of up to 97%. This paper evaluates the potential improvement in imaging performance of a gamma camera based on the new PSPMT, compared with the two previous generations of PSPMTs. To this aim, the factors affecting the gamma camera's final response, such as PSPMT anode gain variation and position resolution, are analyzed and related to the counting uniformity, energy resolution, position linearity, detection efficiency and intrinsic spatial resolution. The results show that uniformity of pulse-height response appears to be the main parameter governing imaging performance. Furthermore, precise identification of individual pixels alone does not appear sufficient to fully correct the counting uniformity and gain response. However, considering present technological limits, Flat Panel PSPMTs could be the best trade-off between gamma camera imaging performance, compactness and large detection area.

  4. Observation of Polarized Positrons from an Undulator-Based Source

    SciTech Connect

    Alexander, G; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K.T.; Mikhailichenko, A.A.; Moortgat-Pick, G.A.; Pahl, P.; /Tel Aviv U. /Cornell U., Phys. Dept. /SLAC /Tennessee U. /Humboldt U., Berlin /DESY /Yerevan Phys. Inst. /Aachen, Tech. Hochsch. /DESY, Zeuthen /Princeton U. /Durham U. /Daresbury

    2008-03-06

    An experiment (E166) at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons and photons in matter.

  5. Performance of the (n,γ)-Based Positron Beam Facility NEPOMUC

    SciTech Connect

    Schreckenbach, K.; Hugenschmidt, C.; Piochacz, C.; Stadlbauer, M.; Loewe, B.; Maier, J.; Pikart, P.

    2009-01-28

    The in-pile positron source NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron capture gamma rays produced by cadmium in a beam tube tip close to the reactor core. Gamma ray absorption in platinum produces positrons, which are moderated and formed into the beam. An unprecedented beam intensity of 9×10^8 e+/s is achieved (at 1 keV). The performance and applications of the facility are presented.

  7. Camera-based independent couch height verification in radiation oncology.

    PubMed

    Kusters, Martijn; Louwe, Rob; Biemans-van Kastel, Liesbeth; Nieuwenkamp, Henk; Zahradnik, Rien; Claessen, Roy; van Seters, Ronald; Huizenga, Henk

    2015-01-01

    For specific radiation therapy (RT) treatments, it is advantageous to use the isocenter-to-couch distance (ICD) for initial patient setup.(1) Since sagging of the treatment couch is not properly taken into account by the electronic readout of the treatment machine, this readout cannot be used for initial patient positioning based on the ICD. Therefore, initial patient positioning to the prescribed ICD had been carried out using a ruler prior to each treatment fraction in our institution. However, the ruler method is laborious and logging of data is not possible. The objective of this study is to replace the ruler-based setup of the couch height with an independent, user-friendly, optical camera-based method whereby the radiation technologists need only move the couch to the correct height, which is visible on a display. A camera-based independent couch height measurement system (ICHS) was developed in cooperation with Panasonic Electric Works Western Europe. Clinical data showed that the ICHS is at least as accurate as the ruler-based verification of the ICD. The system has been successfully implemented in seven treatment rooms since 10 September 2012. The benefits are a more streamlined workflow, a reduction of human errors during initial patient setup, and logging of the actual couch height at the isocenter. Daily QA shows that the systems are stable and operate within the set 1 mm tolerance; regular QA is necessary to guarantee that the system works correctly. PMID:26699308

  8. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method that compares a stored image of the goal location with the current view to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining the distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information from SIFT (Homing in Scale Space, HiSS). HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the SIFT keypoint vector with a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera, and evaluate the performance of both methods using a set of performance measures described in this paper.
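    A minimal sketch of how depth-augmented landmark matches yield a home vector, under the simplifying assumption that the robot's heading is the same at the current and goal poses (the full method also recovers orientation):

```python
import numpy as np

def home_vector(current_pts, goal_pts):
    """Home vector from matched landmark coordinates.

    current_pts[i] and goal_pts[i] are the 3D coordinates (x, y, z) of the
    same landmark as seen from the current pose and from the goal pose.
    """
    current_pts = np.asarray(current_pts, dtype=float)
    goal_pts = np.asarray(goal_pts, dtype=float)
    # A static landmark w satisfies p_current = w - c_current and
    # p_goal = w - c_goal, hence p_current - p_goal = c_goal - c_current:
    # the displacement from the current position to the goal.
    h = (current_pts - goal_pts).mean(axis=0)
    distance = np.linalg.norm(h[:2])                  # planar driving distance
    bearing = np.degrees(np.arctan2(h[1], h[0]))      # direction to drive
    return h, distance, bearing
```

    Averaging over all matched landmarks suppresses noise from individual stereo depth estimates.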

  9. Fast background subtraction for moving cameras based on nonparametric models

    NASA Astrophysics Data System (ADS)

    Sun, Feng; Qin, Kaihuai; Sun, Wei; Guo, Huayuan

    2016-05-01

    In this paper, a fast background subtraction algorithm for freely moving cameras is presented. A nonparametric sample consensus model is employed as the appearance background model. The as-similar-as-possible warping technique, which obtains multiple homographies for different regions of the frame, is introduced to robustly estimate and compensate the camera motion between the consecutive frames. Unlike previous methods, our algorithm does not need any preprocess step for computing the dense optical flow or point trajectories. Instead, a superpixel-based seeded region growing scheme is proposed to extend the motion cue based on the sparse optical flow to the entire image. Then, a superpixel-based temporal coherent Markov random field optimization framework is built on the raw segmentations from the background model and the motion cue, and the final background/foreground labels are obtained using the graph-cut algorithm. Extensive experimental evaluations show that our algorithm achieves satisfactory accuracy, while being much faster than the state-of-the-art competing methods.
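    The per-pixel sample-consensus appearance model can be sketched in a ViBe-like form; the sample count, matching radius and update rule below are illustrative defaults, and the motion-compensation and MRF stages described above are omitted:

```python
import numpy as np

class SampleConsensusModel:
    """Minimal per-pixel sample-consensus background model for a
    single-channel video. A pixel is background if enough stored
    samples lie within `radius` of its current value."""

    def __init__(self, first_frame, n_samples=20, radius=20, min_matches=2):
        # initialise every sample bank with the first frame
        self.samples = np.repeat(first_frame[None, ...],
                                 n_samples, axis=0).astype(float)
        self.radius, self.min_matches = radius, min_matches
        self.rng = np.random.default_rng(0)

    def segment(self, frame):
        """Return a boolean foreground mask and update the model."""
        close = np.abs(self.samples - frame) < self.radius
        bg = close.sum(axis=0) >= self.min_matches
        # conservative update: overwrite one random sample, background only
        k = self.rng.integers(len(self.samples))
        self.samples[k][bg] = frame[bg]
        return ~bg
```

    Updating only at background pixels keeps genuine foreground objects from being absorbed into the model.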

  10. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler), yielding information on the heliocentric spacecraft state, and images taken by an on-board camera, yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper describes the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  11. Bioimpedance-based respiratory gating method for oncologic positron emission tomography (PET) imaging with first clinical results

    NASA Astrophysics Data System (ADS)

    Koivumäki, T.; Vauhkonen, M.; Teuho, J.; Teräs, M.; Hakulinen, M. A.

    2013-04-01

    Respiratory motion may cause significant image artefacts in positron emission tomography/computed tomography (PET/CT) imaging. This study introduces a new bioimpedance-based gating method for minimizing respiratory artefacts. The method was studied in 12 oncologic patients by evaluating three parameters: the maximum metabolic activity of radiopharmaceutical accumulations, the size of these targets, and their target-to-background ratio. The bioimpedance-gated images were compared with non-gated images and with images gated by a reference method, chest wall motion monitoring with an infrared camera. The bioimpedance method showed clear improvement, with increased metabolic activity and decreased target volume compared to non-gated images, and produced results consistent with the reference method. Thus, the method may have great potential for respiratory gating in nuclear medicine imaging.
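    The gating step itself, independent of whether the respiratory trace comes from bioimpedance or an infrared camera, amounts to binning events by respiratory phase. A simplified sketch with naive peak detection (all names and parameters illustrative, not from the study):

```python
import numpy as np

def phase_gate(event_times, resp_times, resp_signal, n_bins=8):
    """Assign list-mode event timestamps to respiratory phase bins.

    Phase is the fraction of the current breathing cycle elapsed, with
    cycles delimited by peaks of the respiratory signal. Events outside
    a complete cycle are discarded.
    """
    s = np.asarray(resp_signal, dtype=float)
    # crude local-maximum peak detection on the respiratory trace
    peaks = [i for i in range(1, len(s) - 1)
             if s[i] > s[i - 1] and s[i] >= s[i + 1]]
    peak_t = np.asarray(resp_times)[peaks]
    bins = [[] for _ in range(n_bins)]
    for t in event_times:
        k = np.searchsorted(peak_t, t) - 1
        if 0 <= k < len(peak_t) - 1:   # event lies inside a complete cycle
            phase = (t - peak_t[k]) / (peak_t[k + 1] - peak_t[k])
            bins[min(int(phase * n_bins), n_bins - 1)].append(t)
    return bins
```

    Each bin is then reconstructed separately, so every gated image sees the anatomy near one respiratory phase only.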

  12. Positron annihilation in neutron irradiated iron-based materials

    NASA Astrophysics Data System (ADS)

    Lambrecht, M.; Almazouzi, A.

    2011-01-01

    The hardening and embrittlement of reactor pressure vessel steels is of great concern in current nuclear power plant lifetime assessment. This embrittlement is caused by irradiation-induced damage, such as vacancies, interstitials, solutes and their clusters, but its mechanism is not yet fully understood. The real nature of the irradiation damage must therefore be examined, together with its evolution in time. Positron annihilation spectroscopy has been shown to be a powerful method for analyzing some of these defects; both vacancy-type clusters and precipitates can be visualized by positrons. Recently, at SCK·CEN, a new setup has been constructed, calibrated and optimized to measure the coincidence Doppler broadening and lifetime of neutron irradiated materials. To compare the results obtained by positron studies with those of other techniques (such as transmission electron microscopy, atom probe tomography and small angle neutron scattering), quantitative estimates of the size and density of the annihilation sites are needed. Using the approach proposed by Vehanen et al., an attempt is made to calculate the needed quantities in Fe and Fe-Cu binary alloys that were neutron irradiated to different doses. The results are discussed, highlighting the difficulty of identifying the annihilation centres even in these simple model alloys, in spite of using both lifetime and Doppler broadening measurements on the same samples.

  13. Color binarization for complex camera-based images

    NASA Astrophysics Data System (ADS)

    Thillou, Céline; Gosselin, Bernard

    2005-01-01

    This paper describes a new automatic color thresholding based on wavelet denoising and color clustering with K-means in order to segment text information in a camera-based image. Several parameters bring different information and this paper tries to explain how to use this complementarity. It is mainly based on the discrimination between two kinds of backgrounds: clean or complex. On one hand, this separation is useful to apply a particular algorithm on each of these cases and on the other hand to decrease the computation time for clean cases for which a faster method could be considered. Finally, several experiments were done to discuss results and to conclude that the use of a discrimination between kinds of backgrounds gives better results in terms of Precision and Recall.
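    The K-means colour clustering stage can be sketched as follows, with K = 2 and the smaller cluster taken as text (a common heuristic); the wavelet denoising and clean/complex background discrimination described above are omitted:

```python
import numpy as np

def kmeans_binarize(img_rgb, n_iter=10):
    """Binarize a colour image by clustering its pixels into two colour
    classes with K-means (K = 2); the smaller cluster is labelled text."""
    pix = img_rgb.reshape(-1, 3).astype(float)
    # deterministic init: darkest and brightest channel values as seeds
    centers = np.stack([pix.min(axis=0), pix.max(axis=0)])
    for _ in range(n_iter):
        d = np.linalg.norm(pix[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pix[labels == k].mean(axis=0)
    text_cluster = np.argmin(np.bincount(labels, minlength=2))
    return (labels == text_cluster).reshape(img_rgb.shape[:2])
```

    On a document image, dark text over a light background (or vice versa) separates cleanly into the two clusters, and the minority cluster is almost always the text.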
  15. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest, or of targets affixed to objects of interest, in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of view of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white

  16. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, using a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that takes into account OCR accuracy and a newly introduced measure based on a semi-automatic procedure. PMID:20876019

  17. Whole blood glucose analysis based on smartphone camera module

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine-functionalized silica (AFSiO2) nanoparticles via electrostatic attraction in order to achieve glucose oxidation on the chip. The reagent-immobilized AFSiO2 nanoparticles form a semi-transparent reaction platform, a chip well suited to analysis by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and the enzyme-immobilized PET film chip, constitutes a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis.

  18. Whole blood glucose analysis based on smartphone camera module.

    PubMed

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, assay reagent is immobilized onto amine-functionalized silica (AFSiO2) nanoparticles via electrostatic attraction in order to achieve glucose oxidation on the chip. The reagent-immobilized AFSiO2 nanoparticles form a semi-transparent reaction platform, a chip well suited to analysis by a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and the enzyme-immobilized PET film chip, constitutes a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis. PMID:26524683

  19. Improvement of digital photoelasticity based on camera response function.

    PubMed

    Chang, Shih-Hsin; Wu, Hsien-Huang P

    2011-09-20

    Studies on photoelasticity have been conducted by many researchers in recent years, and many equations for photoelastic analysis based on digital images have been proposed. While these equations are all expressed in terms of the light intensity emitted from the analyzer, pixel values of the digital image are actually used in the real calculations. In this paper, a proposal to use the relative light intensity obtained through the camera response function, in place of the pixel value, for photoelastic analysis is investigated. Isochromatic images generated from relative light intensity and from pixel values were compared to evaluate the effectiveness of the new approach. The results showed that when relative light intensity is used, the quality of an isochromatic image can be greatly improved both visually and quantitatively. We believe that the technique proposed in this paper can also be used to improve the performance of other types of photoelastic analysis using digital images. PMID:21947044
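The substitution the paper proposes can be sketched as follows. The gamma-curve response function used here is only a stand-in assumption; a real camera's response function would be measured for the specific sensor, for example from exposure-bracketed images:

```python
# Illustrative sketch: replace raw pixel values with relative light intensity
# recovered through an (inverse) camera response function. The gamma-style
# CRF below is an assumed stand-in, not a measured curve.

def inverse_crf(pixel, gamma=2.2, max_value=255.0):
    """Map an 8-bit pixel value to relative light intensity in [0, 1]."""
    return (pixel / max_value) ** gamma

# Two pixels whose values differ by 2x correspond to a >4x intensity ratio
# under a gamma of 2.2, which is why feeding raw pixel values into
# intensity-based photoelastic equations distorts the result.
i_low = inverse_crf(100)
i_high = inverse_crf(200)
ratio = i_high / i_low
```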

  20. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management marks critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  1. Conceptual design of a slow positron source based on a magnetic trap

    NASA Astrophysics Data System (ADS)

    Volosov, V. I.; Meshkov, O. I.; Mezentsev, N. A.

    2001-09-01

    A unique 10.3 T superconducting wiggler was designed and manufactured at BINP SB RAS. The installation of this wiggler in the SPring-8 storage ring provides a possibility to generate a high-intensity beam of photons (SR) with energy above 1 MeV (Ando et al., J. Synchrotron Radiat. 5 (1998) 360). Conversion of photons to positrons on high-Z material (tungsten) targets creates an integrated positron flux of more than 10^13 particles per second. The energy spectrum of the positrons has a maximum at 0.5 MeV and a half-width of about 1 MeV (Plokhoi et al., Jpn. J. Appl. Phys. 38 (1999) 604). The traditional methods of positron moderation have an efficiency ε = Ns/Nf of 10^-4 (metallic moderators) to 10^-2 (solid rare gas moderators) (Mills and Gullikson, Appl. Phys. Lett. 49 (1986) 1121). The high flux of primary positrons restricts the choice to a tungsten moderator, which has ε ≈ 10^-4 only (Schultz, Nucl. Instr. and Meth. B 30 (1988) 94). The aim of our project is to obtain a moderation efficiency ε ≥ 10^-1. We propose to moderate the positrons inside a multi-stage magnetic trap based on several (3-6) electromagnetic traps connected in series. The magnetic field of the traps grows consecutively from stage to stage. We propose to release the positrons from the converter with the use of an additional relativistic electron beam passing in synchronism with the SR pulse in the vicinity of the converter. The average electron beam energy and current are 1-2 MeV and 100 mA, respectively. The electrical field of the beam is high enough to distort the positron paths by an amount comparable with the Larmor radius. The further drift of the positrons to the trap axis will occur due to the strengthening of the magnetic field. The magnetic field amplitude of adjacent traps varies in time in antiphase and increases from 0.9 T in the first stage to 6 T in the last one. The positron transition from stage to stage takes place at the moment of field equalization. The removal

  2. Formation of buffer-gas-trap based positron beams

    SciTech Connect

    Natisin, M. R. Danielson, J. R. Surko, C. M.

    2015-03-15

    Presented here are experimental measurements, analytic expressions, and simulation results for pulsed, magnetically guided positron beams formed using a Penning-Malmberg style buffer gas trap. In the relevant limit, particle motion can be separated into motion along the magnetic field and gyro-motion in the plane perpendicular to the field. Analytic expressions are developed which describe the evolution of the beam energy distributions, both parallel and perpendicular to the magnetic field, as the beam propagates through regions of varying magnetic field. Simulations of the beam formation process are presented, with the parameters chosen to accurately replicate experimental conditions. The initial conditions and ejection parameters are varied systematically in both experiment and simulation, allowing the relevant processes involved in beam formation to be explored. These studies provide new insights into the underlying physics, including significant adiabatic cooling, due to the time-dependent beam-formation potential. Methods to improve the beam energy and temporal resolution are discussed.
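The separation into motion along the field and gyro-motion implies, in the slowly-varying limit, conservation of the magnetic moment μ = E⊥/B, which is the mechanism behind the adiabatic cooling mentioned above. A toy sketch with invented numbers, not the experiment's parameters:

```python
# Sketch of adiabatic energy repartition: in the slowly-varying limit the
# magnetic moment mu = E_perp / B is conserved, so guiding a beam from field
# B1 into a weaker field B2 cools the perpendicular motion, with the deficit
# appearing in the parallel energy (total kinetic energy conserved).
# All numbers are illustrative.

def transport(e_par, e_perp, b1, b2):
    """Return (E_parallel, E_perp) after adiabatic transport from B1 to B2."""
    e_perp2 = e_perp * (b2 / b1)          # mu = E_perp / B conserved
    e_par2 = e_par + (e_perp - e_perp2)   # energy difference goes to parallel motion
    return e_par2, e_perp2

# A 25 meV perpendicular spread drops tenfold when B falls from 0.1 T to 0.01 T.
e_par, e_perp = transport(e_par=1.0, e_perp=0.025, b1=0.1, b2=0.01)
```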

  3. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    NASA Astrophysics Data System (ADS)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    We have installed in Saclay a facility for an intense positron source in November 2008. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·1011 per second. The study of moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are parts of a project to produce positively charged antihydrogen ions aiming to demonstrate the feasibility of a free fall antigravity measurement of neutral antihydrogen.

  4. A trap-based pulsed positron beam optimised for positronium laser spectroscopy.

    PubMed

    Cooper, B S; Alonso, A M; Deller, A; Wall, T E; Cassidy, D B

    2015-10-01

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields. PMID:26520934

  5. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    SciTech Connect

    Cooper, B. S. Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B.

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^−3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  6. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully recover the 3-D shape of the photographed object.

  7. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
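The distortion terms mentioned above follow the standard Brown-Conrady model that OpenCV's calibration estimates: radial coefficients k1, k2 plus decentering (tangential) coefficients p1, p2, applied to normalized image coordinates. A minimal sketch with made-up coefficient values:

```python
# Brown-Conrady distortion model (the form OpenCV calibrates): radial terms
# k1, k2 and decentering terms p1, p2 applied to normalized coordinates.
# Coefficient values in the example are invented for illustration.

def distort(x, y, k1, k2, p1, p2):
    """Map ideal normalized coordinates (x, y) to distorted coordinates."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity.
assert distort(0.3, -0.2, 0, 0, 0, 0) == (0.3, -0.2)
```

Calibration fits these coefficients (together with the intrinsics) by minimizing the reprojection error of the detected checkerboard corners.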

  8. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel computationally efficient and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly. PMID:25710087

  9. Camera calibration based on the back projection process

    NASA Astrophysics Data System (ADS)

    Gu, Feifei; Zhao, Hong; Ma, Yueyang; Bu, Penghui

    2015-12-01

    Camera calibration plays a crucial role in 3D measurement tasks of machine vision. In typical calibration processes, camera parameters are iteratively optimized in the forward imaging process (FIP). However, the results can only guarantee the minimum of 2D projection errors on the image plane, but not the minimum of 3D reconstruction errors. In this paper, we propose a universal method for camera calibration, which uses the back projection process (BPP). In our method, a forward projection model is used to obtain initial intrinsic and extrinsic parameters with a popular planar checkerboard pattern. Then, the extracted image points are projected back into 3D space and compared with the ideal point coordinates. Finally, the estimation of the camera parameters is refined by a non-linear function minimization process. The proposed method can obtain a more accurate calibration result, which is more physically useful. Simulation and practical data are given to demonstrate the accuracy of the proposed method.

  10. The System Design, Engineering Architecture, and Preliminary Results of a Lower-Cost High-Sensitivity High-Resolution Positron Emission Mammography Camera.

    PubMed

    Zhang, Yuxuan; Ramirez, Rocio A; Li, Hongdi; Liu, Shitao; An, Shaohui; Wang, Chao; Baghaei, Hossain; Wong, Wai-Hoi

    2010-02-01

    A lower-cost high-sensitivity high-resolution positron emission mammography (PEM) camera is developed. It consists of two detector modules, each with a planar detector bank of 20 × 12 cm^2. Each bank has 60 low-cost PMT-Quadrant-Sharing (PQS) LYSO blocks arranged in a 10 × 6 array with two types of geometries. One is the symmetric 19.36 × 19.36 mm^2 block made of 1.5 × 1.5 × 10 mm^3 crystals in a 12 × 12 array. The other is the asymmetric 19.36 × 26.05 mm^2 block made of 1.5 × 1.9 × 10 mm^3 crystals in a 12 × 13 array. One row (10) of the elongated blocks is used along one side of the bank to reclaim the half-empty PMT photocathode of the regular PQS design and thereby reduce the dead area at the edge of the module. The bank has a high overall crystal packing fraction of 88%, which results in a very high sensitivity. The mechanical design and electronics have been developed for low cost, compactness, and stability. Each module has four Anger-HYPER decoding electronics channels that can handle a count rate of 3 Mcps for single events. A simple two-module coincidence board with a hardware delay window for random coincidences has been developed, with an adjustable window of 6 to 15 ns. Some of the performance parameters have been studied by preliminary tests and Monte Carlo simulations, including the crystal decoding map and the 17% energy resolution of the detectors, the point-source sensitivity of 11.5% at 50 mm bank-to-bank distance, the 1.2-mm spatial resolution, the 42 kcps peak Noise Equivalent Count Rate at 7.0 mCi total activity in the human body, and the resolution phantom images. These results show that the design goal of building a lower-cost, high-sensitivity, high-resolution PEM detector has been achieved. PMID:20485539

  11. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029

  12. A bionic camera-based polarization navigation sensor.

    PubMed

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029
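One common way a camera-based sensor recovers the polarization direction, sketched here as an assumption rather than the paper's exact pipeline, is to sample intensity behind analyzers at 0°, 45°, 90° and 135°, form the linear Stokes parameters, and halve the resulting angle:

```python
import math

# Standard four-analyzer recipe for the angle of polarization (AoP):
# s1 = I0 - I90, s2 = I45 - I135, AoP = 0.5 * atan2(s2, s1).
# This is textbook polarimetry, not necessarily the paper's pipeline.

def aop_degrees(i0, i45, i90, i135):
    s1 = i0 - i90
    s2 = i45 - i135
    return math.degrees(0.5 * math.atan2(s2, s1))

def malus(i_tot, phi_deg, theta_deg):
    """Intensity of fully linearly polarized light behind an analyzer (Malus's law)."""
    delta = math.radians(theta_deg - phi_deg)
    return i_tot * math.cos(delta) ** 2

# Synthesize measurements for light polarized at 30 degrees and recover it.
phi = 30.0
meas = [malus(1.0, phi, t) for t in (0.0, 45.0, 90.0, 135.0)]
estimate = aop_degrees(*meas)
```

In a single-point mode such a computation is done once per exposure; in a multi-point mode it is repeated per pixel region to map the polarization pattern across the sky.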

  13. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively techniques based on images. It reviews a previously proposed methodology using a DSLR camera with an 18-135 mm lens commonly used for close range photogrammetry, and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time consumed in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and the method could be extended to the whole building.

  14. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time camera calibration system based on OpenCV, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, requires no manual intervention, and can be widely used in various computer vision systems.

  15. Positron source investigation by using CLIC drive beam for Linac-LHC based e+p collider

    NASA Astrophysics Data System (ADS)

    Arıkan, Ertan; Aksakal, Hüsnü

    2012-08-01

    Three different methods, namely conventional, Compton backscattering, and undulator-based, are employed for the production of positrons. The positrons to be used for e+p collisions in a Linac-LHC (Large Hadron Collider) based collider have been studied. The number of produced positrons as a function of drive beam energy, along with the optimum target thickness, has been determined. Three different target materials (W75-Ir25, W75-Ta25, and W75-Re25) have been investigated as sources for the three methods. The number of positrons has been estimated with the FLUKA simulation code. The produced positrons are then passed to the following adiabatic matching device (AMD) and the capture efficiency is determined. Finally, the e+p collider luminosity corresponding to each of the methods mentioned above has been calculated with the CAIN code.

  16. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  17. Positron emission tomography displacement sensitivity: predicting binding potential change for positron emission tomography tracers based on their kinetic characteristics.

    PubMed

    Morris, Evan D; Yoder, Karmen K

    2007-03-01

    There is great interest in positron emission tomography (PET) as a noninvasive assay of fluctuations in synaptic neurotransmitter levels, but questions remain regarding the optimal choice of tracer for such a task. A mathematical method is proposed for predicting the utility of any PET tracer as a detector of changes in the concentration of an endogenous competitor via displacement of the tracer (a.k.a., its 'vulnerability' to competition). The method is based on earlier theoretical work by Endres and Carson and by the authors. A tracer-specific predictor, the PET Displacement Sensitivity (PDS), is calculated from compartmental model simulations of the uptake and retention of dopaminergic radiotracers in the presence of transient elevations of dopamine (DA). The PDS predicts the change in binding potential (ΔBP) for a given change in receptor occupancy because of binding by the endogenous competitor. Simulations were performed using estimates of tracer kinetic parameters derived from the literature. For D2/D3 tracers, the calculated PDS indices suggest a rank order for sensitivity to displacement by DA as follows: raclopride (highest sensitivity), followed by fallypride, FESP, FLB, NMSP, and epidepride (lowest). Although the PDS takes into account the affinity constant for the tracer at the binding site, its predictive value cannot be matched by either a single equilibrium constant, or by any one rate constant of the model. Values for ΔBP have been derived from published studies that employed comparable displacement paradigms with amphetamine and a D2/D3 tracer. The values are in good agreement with the PDS-predicted rank order of sensitivity to displacement. PMID:16788713

  18. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is then to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.

  19. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  20. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or micro-invasive surgery.
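The regulation loop described above, where the line-period error drives a supply-voltage correction, can be caricatured in a few lines. The voltage-to-period sensitivity and the gain below are invented, and the real core runs in FPGA logic rather than software:

```python
# Toy model of the synchronization idea: a self-timed camera's line period
# shrinks as its supply voltage rises (assumed linear here), and a simple
# proportional controller nudges the voltage until the measured line period
# matches the master's. All numbers are illustrative.

def line_period(voltage):
    """Hypothetical sensor: 20 us nominal at 1.8 V, -2 us per extra volt."""
    return 20.0 - 2.0 * (voltage - 1.8)

def synchronize(target_period, voltage=1.8, gain=0.2, steps=200):
    for _ in range(steps):
        error = line_period(voltage) - target_period
        voltage += gain * error  # period too long -> raise voltage to speed up
    return voltage

# Lock a slave camera to a 19 us master line period.
v = synchronize(target_period=19.0)
```

Phase lock between frames then only requires that the slaves also track the master's frame boundaries, as the abstract describes.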

  1. A new depth measuring method for stereo camera based on converted relative extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Song, Xiaowei; Yang, Lei; Wu, Yuanzhao; Liu, Zhong

    2013-08-01

    This paper presents a new depth measuring method for a dual-view stereo camera based on converted relative extrinsic parameters. The relative extrinsic parameters between the left and right cameras, which are obtained by stereo camera calibration, indicate the geometric relationships among the left principal point, the right principal point and the convergence point. Furthermore, the geometry formed by the corresponding points and the object can be obtained by converting between the corresponding points and the principal points. Therefore, the depth of the object can be calculated from the resulting geometry. The correctness of the proposed method has been proved in 3ds Max, and its validity has been verified on a binocular stereo system of Flea2 cameras. We compared our experimental results with a popular RGB-D camera (the Kinect). The comparison shows that our method is reliable and efficient, and requires no epipolar rectification.
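    The underlying idea, depth from the geometry linking the two optical centres and a common object point, can be illustrated with a minimal ray-intersection sketch (not the paper's algorithm). The camera positions, viewing directions and object location below are made-up numbers, and the solver works in a 2D plan view.

```python
# Minimal sketch of convergent-stereo depth: once the two optical centres and
# the viewing directions toward a common point are known (from the relative
# extrinsic parameters), depth follows from intersecting the two rays.

def intersect_rays(c1, d1, c2, d2):
    """Intersect rays c1 + t*d1 and c2 + s*d2 in 2D via Cramer's rule."""
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (c1[0] + t * d1[0], c1[1] + t * d1[1])

# Left camera at the origin, right camera 0.1 m to the side (the baseline),
# both converging on an object 1 m ahead of the baseline.
left, right = (0.0, 0.0), (0.1, 0.0)
obj = intersect_rays(left, (0.05, 1.0), right, (-0.05, 1.0))
print(obj)  # (0.05, 1.0): the object's depth is the second coordinate, 1 m
```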

  2. Unified Camera Tamper Detection Based on Edge and Object Information

    PubMed Central

    Lee, Gil-beom; Lee, Myeong-jin; Lim, Jongtae

    2015-01-01

    In this paper, a novel camera tamper detection algorithm is proposed to detect three types of tamper attacks: covered, moved and defocused. The edge disappearance rate is defined to measure the proportion of edge pixels from the background frame that disappear in the current frame, excluding edges in the foreground. Tamper attacks are detected when the difference between the edge disappearance rate and its temporal average exceeds an adaptive threshold reflecting the environmental conditions of the cameras. The performance of the proposed algorithm is evaluated on short video sequences with three types of tamper attacks and on 24-h video sequences without tamper attacks; the algorithm achieves acceptable detection and false alarm rates for all types of tamper attacks in real environments. PMID:25946628
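    The edge disappearance rate can be sketched as follows. This is a hedged illustration, not the paper's code: edge maps are small boolean grids here, whereas a real system would obtain them from an edge detector such as Canny, and the foreground mask from background subtraction.

```python
# Sketch of the edge disappearance rate: the fraction of background edge
# pixels (outside the foreground mask) that are no longer edges in the
# current frame. A covered or defocused camera pushes this fraction up.

def edge_disappearance_rate(bg_edges, cur_edges, foreground):
    """Fraction of counted background edges missing in the current frame."""
    kept = missing = 0
    for i, row in enumerate(bg_edges):
        for j, is_edge in enumerate(row):
            if not is_edge or foreground[i][j]:
                continue  # only count background edges outside moving objects
            if cur_edges[i][j]:
                kept += 1
            else:
                missing += 1
    total = kept + missing
    return missing / total if total else 0.0

bg  = [[1, 1, 0], [0, 1, 0], [1, 0, 0]]
cur = [[1, 0, 0], [0, 0, 0], [1, 0, 0]]
fg  = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]  # centre pixel occluded by a person
print(edge_disappearance_rate(bg, cur, fg))  # one of three counted edges vanished
```

    A tamper alarm would then compare this rate against its running temporal average plus an adaptive threshold.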

  3. Automatic camera calibration method based on dashed lines

    NASA Astrophysics Data System (ADS)

    Li, Xiuhua; Wang, Guoyou; Liu, Jianguo

    2013-10-01

    We present a new method for the fully automatic calibration of traffic cameras using the end points of dashed lane lines. Our approach uses an improved RANSAC method, aided by a transverse projection of pixels, to detect the dashed lines and their end points. Then, drawing on the geometric relationship between the camera and road coordinate systems, we construct a road model to fit the end points. Finally, using a two-dimensional calibration method, we convert image pixels to meters along the ground-truth lane. In a large number of experiments covering a variety of conditions, our approach performs well, achieving less than 5% error in measuring test lengths in all cases.
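    A RANSAC line fit of the kind used to detect the dashed lines can be sketched as below. The point set, inlier threshold and iteration count are illustrative assumptions, not values from the paper.

```python
# Illustrative RANSAC fit of a 2D line to dash end points with outliers:
# repeatedly sample two points, hypothesize a line, and keep the hypothesis
# supported by the most inliers.
import random

def ransac_line(points, threshold=0.5, iters=200, seed=0):
    """Return ((a, b), inliers) for the best-supported line y = a*x + b."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # skip degenerate vertical samples in this sketch
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(1 for x, y in points if abs(y - (a * x + b)) < threshold)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# End points of dashes on the line y = 2x + 1, plus two gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
(a, b), n = ransac_line(pts)
print(round(a, 3), round(b, 3), n)  # recovers slope 2, intercept 1, 10 inliers
```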

  4. Calibration Methods for a 3D Triangulation Based Camera

    NASA Astrophysics Data System (ADS)

    Schulz, Ulrike; Böhnke, Kay

    A camera sensor captures a gray-level image (1536 x 512 pixels) of a reference body illuminated by a linear laser line. This gray-level image can be used for a 3D calibration. The following paper describes how a calibration program calculates the calibration factors, which serve to determine the size of an unknown body.

  5. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    Under space environmental conditions and the vibration and shock of satellite launch, the CCD plane of a space tridimensional mapping camera can deviate from the focal plane (including deviation caused by a change in camera focal length), and image resolution degrades because of defocusing. For a tridimensional mapping camera, variations in the principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is to calibrate the position of the CCD plane against the code of a photoelectric encoder under vacuum over the focusing range; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and a focusing mechanism driven by a step motor compensates the defocus of the CCD plane. However, if the camera focal length changes under launch vibration and shock or in the space environment, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism was put forward: a quadrant detector is adopted to measure the astigmatism caused by the deviation of the CCD plane, and, by reference to the calibrated relation between the CCD plane position and the astigmatism, the deviation vector of the CCD plane can be obtained. This method accounts for all factors that cause deviation of the CCD plane. Experimental results show that the focusing resolution of a mapping camera focusing mechanism based on the astigmatic method can reach 0.25 μm.

  6. A Sparse Representation-Based Deployment Method for Optimizing the Observation Quality of Camera Networks

    PubMed Central

    Wang, Chang; Qi, Fei; Shi, Guangming; Wang, Xiaotian

    2013-01-01

    Deployment is a critical issue affecting the quality of service of camera networks. Deployment aims to use the fewest cameras to cover the whole scene, which may contain obstacles that occlude the line of sight, with the expected observation quality. This is generally formulated as a non-convex optimization problem, which is hard to solve in polynomial time. In this paper, we propose an efficient convex solution for deployment optimizing the observation quality based on a novel anisotropic sensing model of cameras, which provides a reliable measurement of the observation quality. The deployment is formulated as the selection of a subset of nodes from a redundant initial deployment with numerous cameras, which is an ℓ0 minimization problem. Then, we relax this non-convex optimization to a convex ℓ1 minimization employing the sparse representation. Therefore, the high quality deployment is efficiently obtained via convex optimization. Simulation results confirm the effectiveness of the proposed camera deployment algorithms. PMID:23989826

  7. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the beacon photogrammetry location technique, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter, with the images output to MATLAB. Target, image pixel, world and camera coordinate systems were established respectively. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.

  8. An algorithm for computing extrinsic camera parameters for far-range photogrammetry based on essential matrix

    NASA Astrophysics Data System (ADS)

    Cai, Huimin; Li, Kejie; Liu, Meilian

    2010-11-01

    Far-range photogrammetry is widely used for location determination in dangerous situations. In this paper we discuss a camera calibration problem suited to outdoor use. Location determination based on stereo vision sensors requires high-precision knowledge of the camera parameters, such as camera position, orientation, lens distortion and focal length. Most existing camera calibration methods place many landmarks whose positions are known accurately, but due to the large distances and other practical problems, such landmarks cannot be placed with high precision. This paper shows that even when the positions of the landmarks are unknown, the extrinsic camera parameters can still be obtained via the essential matrix. The difference between the real and the computed camera parameters gives rise to a geometric error. We develop and present a theoretical analysis of this geometric error and of how to obtain the extrinsic camera parameters with high precision in large-scale measurement. Experimental results from a project measuring the drop point of a high-speed object confirm the high precision of the proposed method compared with traditional ones.
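    The essential-matrix relation the method relies on can be checked numerically. This is a small verification sketch, not the paper's algorithm; the rotation angle, baseline and 3D points are made-up values. For normalized image points x1, x2 of the same 3D point seen by two cameras related by (R, t), x2^T E x1 = 0 with E = [t]_x R.

```python
# Verify the epipolar constraint x2^T E x1 = 0 for E = [t]_x R, using
# synthetic two-view geometry (all numbers are illustrative).
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

theta = math.radians(10.0)  # assumed relative yaw between the two cameras
R = [[math.cos(theta), 0.0, math.sin(theta)],
     [0.0, 1.0, 0.0],
     [-math.sin(theta), 0.0, math.cos(theta)]]
t = [0.2, 0.0, 0.05]  # assumed baseline
tx = [[0.0, -t[2], t[1]], [t[2], 0.0, -t[0]], [-t[1], t[0], 0.0]]  # skew matrix [t]_x
E = matmul(tx, R)  # essential matrix

def epipolar_residual(P1):
    """x2^T E x1 for a 3D point P1 given in the first camera's frame."""
    P2 = [a + b for a, b in zip(matvec(R, P1), t)]  # the same point in camera-2 frame
    x1 = [P1[0] / P1[2], P1[1] / P1[2], 1.0]  # normalized image coordinates
    x2 = [P2[0] / P2[2], P2[1] / P2[2], 1.0]
    return sum(x2[i] * matvec(E, x1)[i] for i in range(3))

for P1 in ([0.3, -0.1, 2.0], [-0.5, 0.4, 3.5]):
    print(abs(epipolar_residual(P1)) < 1e-12)  # True: the constraint holds
```

    In practice E is estimated from point correspondences and then decomposed to recover the relative rotation and (up-to-scale) translation.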

  9. Design of an infrared camera based aircraft detection system for laser guide star installations

    SciTech Connect

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.

  10. Study of a Coincidence Detector Using a Suspension of Superheated Superconducting Grains in a High Density Dielectric Matrix for Positron Emission Tomography and γ-γ Tagging

    NASA Astrophysics Data System (ADS)

    Bruère Dawson, R.; Maillard, J.; Maurel, G.; Parisi, J.; Silva, J.; Waysand, G.

    2006-01-01

    We demonstrate the feasibility of coincidence detectors based on superheated superconducting grains (SSG) in a high density dielectric matrix (HDDM) for two applications: 1) positron cameras for small animal imaging, where two diametrically opposite cells are simultaneously hit by 511 keV gammas; 2) tagging of γ-γ events in electron positron colliders.

  11. Microstructure Evaluation of Fe-BASED Amorphous Alloys Investigated by Doppler Broadening Positron Annihilation Technique

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Huang, Ping; Wang, Yuxin; Yan, Biao

    2013-07-01

    The microstructure of Fe-based amorphous and nanocrystalline soft magnetic alloys has been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM) and the Doppler broadening positron annihilation technique (PAT). Doppler broadening measurements reveal that amorphous alloys that can form a nanocrystalline phase (Finemet, Type I) have more defects (free volume) than alloys that cannot (Metglas, Type II). XRD and TEM characterization indicates that the nanocrystallization of the amorphous Finemet alloy occurs at 460°C, where nanocrystallites of α-Fe with an average grain size of a few nanometers form in an amorphous matrix. With increasing annealing temperature up to 500°C, the average grain size increases to around 12 nm. During the annealing of the Finemet alloy, it has been demonstrated that positrons annihilate in quenched-in defects, the crystalline nanophase and amorphous-nanocrystalline interfaces. The change of the line shape parameter S with annealing temperature in the Finemet alloy is mainly due to structural relaxation, the pre-nucleation of Cu nuclei and the nanocrystallization of the α-Fe(Si) phase during annealing. This study provides new insight into positron behavior during the nanocrystallization of metallic glasses, especially in the presence of single or multiple nanophases embedded in the amorphous matrix.

  12. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and remains one of its most active research fields. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated, so a high-accuracy camera calibration algorithm is proposed based on images of planar or tridimensional targets. Using this algorithm, the internal parameters of the camera are calibrated from an existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, Tsai's general algorithm and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provides a reference for precise measurement of relative position and attitude.

  13. Characterization of the CCD and CMOS cameras for grating-based phase-contrast tomography

    NASA Astrophysics Data System (ADS)

    Lytaev, Pavel; Hipp, Alexander; Lottermoser, Lars; Herzen, Julia; Greving, Imke; Khokhriakov, Igor; Meyer-Loges, Stephan; Plewka, Jörn; Burmester, Jörg; Caselle, Michele; Vogelgesang, Matthias; Chilingaryan, Suren; Kopmann, Andreas; Balzer, Matthias; Schreyer, Andreas; Beckmann, Felix

    2014-09-01

    In this article we present the quantitative characterization of CCD and CMOS sensors used at the microtomography experiments operated by HZG at PETRA III at DESY in Hamburg, Germany. A standard commercial CCD camera is compared to a camera based on a CMOS sensor; the CMOS camera is modified for grating-based differential phase-contrast tomography. The main goal of the project is to quantify and optimize the statistical parameters of this camera system. Key performance parameters such as readout noise, conversion gain and full-well capacity are used to define an optimized measurement for grating-based phase-contrast. First results are shown.
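    Conversion gain, one of the key parameters named above, is typically estimated with the mean-variance (photon transfer) method: in the shot-noise-limited regime the variance in digital numbers equals the mean divided by the gain K (in e-/DN), so K follows from a straight-line fit of variance against mean. The (mean, variance) pairs below are synthetic, generated under an assumed gain, not measured data from the paper.

```python
# Photon-transfer sketch: in the shot-noise-limited regime,
# variance_DN = mean_DN / K with K the conversion gain in e-/DN,
# so K is recovered from the slope of variance vs. mean.

def fit_slope_through_origin(xs, ys):
    """Least-squares slope of y = m*x (no intercept term)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Synthetic flat-field statistics at several exposure levels, generated
# with an assumed conversion gain of 2.0 e-/DN (variance_DN = mean_DN / 2).
means_dn = [100.0, 400.0, 900.0, 1600.0]
vars_dn = [m / 2.0 for m in means_dn]

slope = fit_slope_through_origin(means_dn, vars_dn)
gain_e_per_dn = 1.0 / slope
print(gain_e_per_dn)  # recovers the assumed 2.0 e-/DN
```

    Real measurements would also subtract the dark/readout-noise floor before fitting, per the usual photon-transfer procedure.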

  14. Camera-based curvature measurement of a large incandescent object

    NASA Astrophysics Data System (ADS)

    Ollikkala, Arttu V. H.; Kananen, Timo P.; Mäkynen, Anssi J.; Holappa, Markus

    2013-04-01

    The goal of this work was to implement a low-cost machine vision system to help the roller operator estimate the amount of strip camber during the rolling process. The machine vision system, consisting of a single camera, a standard PC and a LabVIEW program using straightforward image analysis, determines the magnitude and direction of camber and presents the results in both numerical and graphical form on the computer screen. The system was calibrated with an LED set-up, which was also used to validate the accuracy of the system by mimicking strip curvatures. The validation showed that the maximum difference between the true and measured values was less than +/-4 mm (k=0.95) over the 22-meter-long test pattern.

  15. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion; in particular, we derive a new depth distortion model directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  16. A descriptive geometry based method for total and common cameras fields of view optimization

    NASA Astrophysics Data System (ADS)

    Salmane, H.; Ruichek, Y.; Khoudour, L.

    2011-07-01

    The presented work is conducted in the framework of the ANR-VTT PANsafer project (Towards a safer level crossing). One of the objectives of the project is to develop a video surveillance system able to detect and recognize potentially dangerous situations around level crossings. This paper addresses the problem of positioning and orienting cameras so that they view the monitored scenes optimally. In general, adjusting camera position and orientation is done experimentally and empirically by trying different geometrical configurations. This step requires a lot of time to even approximately adjust the total and common fields of view of the cameras, especially in constrained environments such as level crossings. To simplify this task and obtain more precise camera positioning and orientation, we propose a method that automatically optimizes the total and common camera fields of view with respect to the desired scene. Based on descriptive geometry, the method estimates the best camera positions and orientations by optimizing the surfaces of 2D domains obtained by projecting/intersecting the field of view of each camera on/with horizontal and vertical planes. The proposed method is evaluated and tested to demonstrate its effectiveness.

  17. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies the region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) camera with an NIR illuminator. Depending on the camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground-truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers may use such ground-truth information, but they do not make it public, so researchers and developers of gaze tracking systems cannot refer to it. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers should be able to implement an optimal gaze tracking system more easily. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest. PMID:27589768
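    The DOF trade-off at stake can be sketched with the standard thin-lens limits of acceptable focus, D_near = u f^2 / (f^2 + N c (u - f)) and D_far = u f^2 / (f^2 - N c (u - f)). The focal length, f-number, circle of confusion and subject distance below are illustrative assumptions, not values from the paper.

```python
# Thin-lens depth-of-field limits for a subject at distance u (all in mm):
# a long, fast lens focused at close range leaves very little usable depth,
# which is why head movement measurements matter when choosing the lens.

def dof_limits(f_mm, n_stop, coc_mm, u_mm):
    """Near and far limits of acceptable focus for subject distance u."""
    k = n_stop * coc_mm * (u_mm - f_mm)
    near = u_mm * f_mm ** 2 / (f_mm ** 2 + k)
    far = u_mm * f_mm ** 2 / (f_mm ** 2 - k) if f_mm ** 2 > k else float("inf")
    return near, far

# A 50 mm f/2 lens with a 0.02 mm circle of confusion, focused at 700 mm
# (a plausible camera-to-eye distance for a desktop gaze tracker).
near, far = dof_limits(50.0, 2.0, 0.02, 700.0)
print(round(near, 1), round(far, 1))  # only about 15 mm of usable depth
```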

  18. Metric Potential of a 3D Measurement System Based on Digital Compact Cameras

    PubMed Central

    Sanz-Ablanedo, Enoc; Rodríguez-Pérez, José Ramón; Arias-Sánchez, Pedro; Armesto, Julia

    2009-01-01

    This paper presents an optical measuring system based on low cost, high resolution digital cameras. Once the cameras are synchronised, the portable and adjustable system can be used to observe living beings, bodies in motion, or deformations of very different sizes. Each of the cameras has been modelled individually and studied with regard to the photogrammetric potential of the system. We have investigated the photogrammetric precision obtained from the crossing of rays, the repeatability of results, and the accuracy of the coordinates obtained. Systematic and random errors are identified in validity assessment of the definition of the precision of the system from crossing of rays or from marking residuals in images. The results have clearly demonstrated the capability of a low-cost multiple-camera system to measure with sub-millimetre precision. PMID:22408520

  19. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  20. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    NASA Astrophysics Data System (ADS)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high-resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of FMC. This paper aims to evaluate the strengths of each sensor type based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field has about 30 Ground Control Points (GCPs), providing an ideal scenario for a proper geometric evaluation of the cameras, and includes both a Siemens star and scale bars to reveal any blurring caused by forward motion. The comparison showed that both cameras deliver highly accurate photogrammetric results after post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.
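    The FMC argument above reduces to a back-of-the-envelope calculation: the forward-motion smear in pixels is flight speed times exposure time divided by the ground sample distance. The speed, exposure times and GSD below are illustrative assumptions.

```python
# Forward-motion smear in pixels: speed * exposure / ground sample distance.
# A CCD limited to ISO 200 needs a long exposure (smear without FMC), while
# a high-ISO CMOS can shorten the exposure and stay under one pixel.

def smear_px(speed_mps, exposure_s, gsd_m):
    """Image smear, in pixels, accumulated during one exposure."""
    return speed_mps * exposure_s / gsd_m

# 60 m/s aircraft, 5 cm GSD (assumed values).
print(round(smear_px(60.0, 1 / 250, 0.05), 2))   # 4.8 px at 1/250 s: FMC needed
print(round(smear_px(60.0, 1 / 2000, 0.05), 2))  # 0.6 px at 1/2000 s: no FMC
```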

  1. A Compton camera application for the GAMOS GEANT4-based framework

    NASA Astrophysics Data System (ADS)

    Harkness, L. J.; Arce, P.; Judson, D. S.; Boston, A. J.; Boston, H. C.; Cresswell, J. R.; Dormand, J.; Jones, M.; Nolan, P. J.; Sampson, J. A.; Scraggs, D. P.; Sweeney, A.; Lazarus, I.; Simpson, J.

    2012-04-01

    Compton camera systems can be used to image sources of gamma radiation in a variety of applications such as nuclear medicine, homeland security and nuclear decommissioning. To locate gamma-ray sources, a Compton camera employs electronic collimation, utilising Compton kinematics to reconstruct the paths of gamma rays which interact within the detectors. The main benefit of this technique is the ability to accurately identify and locate sources of gamma radiation within a wide field of view, vastly improving the efficiency and specificity over existing devices. Potential advantages of this imaging technique, along with advances in detector technology, have brought about a rapidly expanding area of research into the optimisation of Compton camera systems, which relies on significant input from Monte-Carlo simulations. In this paper, the functionality of a Compton camera application that has been integrated into GAMOS, the GEANT4-based Architecture for Medicine-Oriented Simulations, is described. The application simplifies the use of GEANT4 for Monte-Carlo investigations by employing a script-based language and plug-in technology. To demonstrate the use of the Compton camera application, simulated data have been generated using the GAMOS application and acquired through experiment for a preliminary validation, using a Compton camera configured with double-sided high-purity germanium strip detectors. Energy spectra and reconstructed images for the data sets are presented.
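    The electronic collimation described above rests on the Compton scattering formula, cos(theta) = 1 - m_e c^2 (1/E_scattered - 1/E_initial): the energy deposited in the first detector fixes the half-angle of the cone of possible source directions. The 662 keV line (Cs-137) and the deposited energy used below are just example numbers.

```python
# Compton cone half-angle from the energy deposited in the scatter detector.
import math

M_E_C2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle_deg(e_initial_kev, e_deposited_kev):
    """Scattering angle for a photon that deposited e_deposited in detector 1."""
    e_scattered = e_initial_kev - e_deposited_kev
    cos_theta = 1.0 - M_E_C2_KEV * (1.0 / e_scattered - 1.0 / e_initial_kev)
    return math.degrees(math.acos(cos_theta))

# A 662 keV photon depositing 200 keV constrains the source to a cone of
# roughly 48 degrees about the scatter direction.
print(compton_cone_angle_deg(662.0, 200.0))
```

    Intersecting many such cones from successive events is what produces the reconstructed image.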

  2. Multi-camera calibration based on openCV and multi-view registration

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method based on OpenCV combined with a multi-view registration calibration algorithm is proposed. First, using a Zhang calibration plate (an 8 x 8 chessboard pattern) and several cameras (three industrial-grade CCDs), nine groups of images are shot from different angles, and OpenCV is used to quickly calibrate the intrinsic parameters of each camera. Secondly, based on the correspondences between the camera views, the computation of the rotation and translation matrices is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of derivatives of matrix-valued functions, formulae for the rotation and translation matrices are deduced using a singular value decomposition algorithm. An iterative method is then used to obtain the full coordinate transformations between pair-wise views, so that precise multi-view registration can be conveniently achieved, yielding the relative positions of the cameras (the extrinsic parameters). Experimental results show that the method is practical for multi-camera calibration.
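    The pair-wise view registration step, recovering the rotation and translation that map one camera's point set onto another's, can be illustrated with a simplified 2D analogue. Note the swap: the paper uses an SVD-based solution under Kuhn-Tucker conditions in 3D, while this sketch uses the closed-form 2D least-squares alignment (rotation angle from the cross/dot sums of centred points). All coordinates are made up.

```python
# 2D rigid registration: find (theta, tx, ty) so that rotating src by theta
# and translating by (tx, ty) best matches dst in the least-squares sense.
import math

def register_2d(src, dst):
    """Closed-form 2D Procrustes alignment of two corresponding point sets."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # Sums over centred coordinates give the optimal rotation angle directly.
    s_cross = sum((x - csx) * (y2 - cdy) - (y - csy) * (x2 - cdx)
                  for (x, y), (x2, y2) in zip(src, dst))
    s_dot = sum((x - csx) * (x2 - cdx) + (y - csy) * (y2 - cdy)
                for (x, y), (x2, y2) in zip(src, dst))
    theta = math.atan2(s_cross, s_dot)
    tx = cdx - (math.cos(theta) * csx - math.sin(theta) * csy)
    ty = cdy - (math.sin(theta) * csx + math.cos(theta) * csy)
    return theta, tx, ty

# Points seen by camera A, and the same points as camera B would see them
# after a 30-degree rotation and a (0.5, -0.2) shift.
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (1.5, 1.5)]
ang = math.radians(30.0)
dst = [(math.cos(ang) * x - math.sin(ang) * y + 0.5,
        math.sin(ang) * x + math.cos(ang) * y - 0.2) for x, y in src]
theta, tx, ty = register_2d(src, dst)
print(round(math.degrees(theta), 6), round(tx, 6), round(ty, 6))  # 30.0 0.5 -0.2
```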

  3. Positron emission tomography.

    PubMed

    Hoffman, E J; Phelps, M E

    1979-01-01

    Conventional nuclear imaging techniques utilizing lead collimation rely on radioactive tracers with little role in human physiology. The principles of imaging based on coincidence detection of the annihilation radiation produced in positron decay indicate that this mode of detection is uniquely suited to emission computed tomography. The only gamma-ray-emitting isotopes of carbon, nitrogen, and oxygen are positron emitters, whose energies are too high for conventional imaging techniques. The development of positron imaging in nuclear medicine would therefore make possible a new class of physiologically active, positron-emitting radiopharmaceuticals. The application of these principles is described in the use of a physiologically active compound labeled with a positron emitter, together with positron-emission computed tomography, to measure the local cerebral metabolic rate in humans. PMID:440173

  4. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  5. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels. PMID:27534480

  6. SIFT-Based Indoor Localization for Older Adults Using Wearable Camera

    PubMed Central

    Zhang, Boxue; Zhao, Qi; Feng, Wenquan; Sun, Mingui; Jia, Wenyan

    2015-01-01

    This paper presents an image-based indoor localization system for tracking older individuals' movement at home. In this system, images are acquired at a low frame rate by a miniature camera worn conveniently at the chest position. The correspondence between adjacent frames is first established by matching SIFT (scale-invariant feature transform) key points in a pair of images. The location changes of these points are then used to estimate the position of the wearer based on the pinhole camera model. A preliminary study conducted in an indoor environment indicates that the location of the wearer can be estimated with adequate accuracy. PMID:26190909

  7. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, and high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost-efficient, but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the mechanics and a cooling system for a full-scale, 1764-pixel camera are ongoing. The paper describes the status of the project.

  8. Positron microscopy

    SciTech Connect

    Hulett, L.D. Jr.; Xu, J.

    1995-02-01

    The negative work function that some materials exhibit for positrons makes possible the development of positron reemission microscopy (PRM). Because of the low energies with which the positrons are emitted, some unique applications, such as the imaging of defects, can be made. The history of the concept of PRM and its present state of development will be reviewed. The potential of positron microprobe techniques will also be discussed.

  9. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    NASA Astrophysics Data System (ADS)

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ-dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as the detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity than standard whole-body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early-stage breast cancer. Nevertheless, designs based on scintillators suffer an intrinsic lack of depth-of-interaction (DOI) information in relatively thick crystals, which constrains the size of the smallest detectable tumor. This work shows how to overcome this intrinsic limitation by substituting pixelated CdTe detectors for the scintillating crystals. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than that of standard commercial devices, with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  10. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  11. Submap joining smoothing and mapping for camera-based indoor localization and mapping

    NASA Astrophysics Data System (ADS)

    Bjärkefur, J.; Karlsson, A.; Grönwall, C.; Rydell, J.

    2011-06-01

    Personnel positioning is important for safety in, e.g., emergency response operations. In GPS-denied environments, possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras. Many camera-based systems create a map and localize themselves relative to it. The computational complexity of most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated using e.g. EKF-SLAM) by the use of smoothing. Using this approach it is possible to build large maps in close to linear time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision camera.
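    The coordinate bookkeeping of a submap join can be illustrated in a few lines; the 2-D landmarks and relative pose below are invented, and the smoothing step that refines the joint map is deliberately omitted:

```python
import numpy as np

def join_submaps(landmarks_a, landmarks_b, R_ab, t_ab):
    """Merge two 2-D submaps by mapping submap B's landmarks into
    submap A's frame via the relative pose (R_ab, t_ab).  This is only
    the rigid-transform bookkeeping; the paper's smoothing over the
    joint map is not reproduced here."""
    b_in_a = landmarks_b @ R_ab.T + t_ab   # rotate then translate each landmark
    return np.vstack([landmarks_a, b_in_a])

theta = np.pi / 2   # assume submap B is rotated 90 degrees relative to A
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
merged = join_submaps(np.array([[0.0, 0.0]]),
                      np.array([[1.0, 0.0]]),
                      R, np.array([2.0, 0.0]))
```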

  12. Narrow Field-of-View Visual Odometry Based on a Focused Plenoptic Camera

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Quint, F.; Stilla, U.

    2015-03-01

    In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm, so that visual odometry becomes possible even for narrow field of view (FOV) cameras. We show that not only tracking profits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to that obtained from a single light-field image.

  13. Linking the near-surface camera-based phenological metrics with leaf chemical and spectroscopic properties

    NASA Astrophysics Data System (ADS)

    Yang, X.; Tang, J.; Mustard, J. F.; Schmitt, J.

    2012-12-01

    Plant phenology is an important indicator of climate change. Near-surface cameras provide a way to continuously monitor plant canopy development at the scale of several hundred meters, which is rarely feasible with either traditional phenological monitoring methods or remote sensing. Thus, digital cameras are being deployed in national networks such as the National Ecological Observatory Network (NEON) and PhenoCam. However, it is unclear how the camera-based phenological metrics are linked with plant physiology as measured from leaf chemical and spectroscopic properties throughout the growing season. We used the temporal trajectories of leaf chemical properties (chlorophyll a and b, carotenoids, leaf water content, leaf carbon/nitrogen content) and leaf reflectance/transmittance (300 to 2500 nm) to understand the temporal changes of camera-based phenological metrics (e.g., relative greenness), which were acquired from our Standalone Phenological Observation System installed on a tower on the island of Martha's Vineyard, MA (dominant species: Quercus alba). Leaf chemical and spectroscopic properties of three oak trees near the tower were measured weekly from June to November 2011. We found that the chlorophyll concentration showed temporal trajectories similar to the relative greenness. However, the change of chlorophyll concentration lagged behind the change of relative greenness by about 20 days in both the spring and the fall. The relative redness is a better indicator of leaf senescence in the fall than the relative greenness. We derived relative greenness from leaf spectroscopy and found that the relative greenness from the camera matched well with that from the spectroscopy in mid-summer, but this relationship faded as leaves started to fall, exposing the branches and soil background. This work suggests that camera-based phenological metrics should be interpreted with caution, and that the relative redness could be a useful indicator of fall senescence.
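    The greenness and redness indices discussed here are simple chromatic coordinates; a minimal numpy sketch (with a made-up one-pixel region of interest) is:

```python
import numpy as np

def chromatic_coords(rgb):
    """Relative greenness G/(R+G+B) and relative redness R/(R+G+B),
    the camera-based phenological metrics discussed above."""
    rgb = np.asarray(rgb, float)
    total = rgb.sum(axis=-1)
    return rgb[..., 1] / total, rgb[..., 0] / total

img = np.array([[[60.0, 120.0, 20.0]]])  # invented one-pixel canopy ROI
gcc, rcc = chromatic_coords(img)
```

    In practice these coordinates are averaged over a canopy region of interest for each image, and their seasonal trajectory is compared against the leaf-level measurements.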

  14. A Global Calibration Method for Widely Distributed Cameras Based on Vanishing Features

    PubMed Central

    Wu, Xiaolong; Wu, Sentang; Xing, Zhihui; Jia, Xiang

    2016-01-01

    This paper presents a global calibration method for widely distributed vision sensors in ring topologies. A planar target with two mutually orthogonal groups of parallel lines is needed for each camera. Firstly, the relative pose of each camera and its corresponding target is found from the vanishing points and lines. Next, an auxiliary camera is used to find the relative poses between neighboring pairs of calibration targets. Then the relative pose from each target to the reference target is initialized by the chain of transformations, followed by nonlinear optimization based on the ring-topology constraint. Lastly, the relative poses between the cameras are found from the relative poses of the calibration targets. Synthetic data, simulated images, and real experiments all demonstrate that the proposed method is reliable and accurate. The accumulated error due to multiple coordinate transformations can be adjusted effectively by the proposed method. In the real experiment, eight targets are located in an area of about 1200 mm × 1200 mm. The accuracy of the proposed method is about 0.465 mm when the number of coordinate transformations reaches its maximum. The proposed method is simple and can be applied to different camera configurations. PMID:27338386
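    The vanishing point of one parallel-line group can be computed with homogeneous-coordinate cross products; this is the standard construction, and the paper's subsequent pose recovery from the vanishing points and lines is not reproduced here (segment endpoints below are invented):

```python
import numpy as np

def vanishing_point(seg1, seg2):
    """Vanishing point of two image line segments belonging to the
    same parallel group.  Each segment is a pair of image points."""
    def line(p, q):  # homogeneous line through two image points
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    v = np.cross(line(*seg1), line(*seg2))  # intersection of the two lines
    return v[:2] / v[2]                     # back to inhomogeneous coords

# Two segments that converge at (4, 4) in the image plane
vp = vanishing_point(((0.0, 0.0), (1.0, 1.0)), ((0.0, 2.0), (2.0, 3.0)))
```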

  15. Broadband Sub-terahertz Camera Based on Photothermal Conversion and IR Thermography

    NASA Astrophysics Data System (ADS)

    Romano, M.; Chulkov, A.; Sommier, A.; Balageas, D.; Vavilov, V.; Batsale, J. C.; Pradere, C.

    2016-05-01

    This paper describes a fast sub-terahertz (THz) camera that is based on the use of a quantum infrared camera coupled with a photothermal converter, called a THz-to-Thermal Converter (TTC), thus allowing fast image acquisition. The performance of the experimental setup is presented and discussed, with an emphasis on the advantages of the proposed method for decreasing noise in raw data and increasing the image acquisition rate. A detectivity of 160 pW Hz^-1/2 per pixel has been achieved, and some examples of the practical implementation of sub-THz imaging are given.

  16. Mach-Zehnder Based Optical Marker/Comb Generator for Streak Camera Calibration

    SciTech Connect

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. The marker or comb indicia serve as timing pulses (markers) or as a constant-frequency train of optical pulses (comb) to be imaged on a streak camera, providing an accurate time base for calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher-frequency optical signal which is output through a fiber-coupled link to the streak camera.
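    The comb generation can be idealized as driving the M-Z intensity transfer function with an RF tone; the bias point, tone frequency, and lossless cos² model below are assumptions for illustration, not the disclosed apparatus:

```python
import numpy as np

def mz_transmission(phase):
    """Ideal (lossless) Mach-Zehnder intensity transfer cos^2(phi/2).
    Driving the phase with an RF tone around the null bias yields a
    train of optical pulses usable as a timing comb."""
    return np.cos(phase / 2.0) ** 2

t = np.linspace(0.0, 1e-9, 1000)                    # 1 ns observation window
phase = np.pi + 0.5 * np.sin(2 * np.pi * 5e9 * t)   # null bias + 5 GHz tone
comb = mz_transmission(phase)                       # pulse train intensity
```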

  17. Shape Function-Based Estimation of Deformation with Moving Cameras Attached to the Deforming Body

    NASA Astrophysics Data System (ADS)

    Jokinen, O.; Ranta, I.; Haggrén, H.; Rönnholm, P.

    2016-06-01

    The paper presents a novel method to measure the 3-D deformation of a large metallic frame structure of a crane under loading from one to several images, when the cameras need to be attached to the self-deforming body, the structure sways during loading, and the imaging geometry is not optimal due to physical limitations. The solution is based on modeling the deformation with adequate shape functions and taking into account that the cameras move depending on the frame deformation. It is shown that the deformation can be estimated even from a single image of targeted points if the 3-D coordinates of the points are known or have been measured before loading using multiple cameras or some other measuring technique. The precision of the method is evaluated to be 1 mm at best, corresponding to 1:11400 of the average distance to the target.
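    The idea of parameterizing deformation with shape functions can be shown with the simplest possible basis; the two-node linear functions and nodal displacements below are illustrative, not the paper's actual crane model:

```python
import numpy as np

def shape_interp(xi, nodal_disp):
    """Interpolate displacement along a two-node element with linear
    shape functions N1 = 1 - xi, N2 = xi (xi in [0, 1]).  Higher-order
    bases work the same way with more nodes per element."""
    N = np.array([1.0 - xi, xi])
    return N @ np.asarray(nodal_disp, float)

# Displacement a quarter of the way along an element whose ends move 0 and 4 mm
d = shape_interp(0.25, [0.0, 4.0])
```

    Estimating deformation then reduces to fitting the few nodal parameters so that the predicted target positions (and the resulting camera motion) agree with the observed image coordinates.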

  18. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement in different directions of the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronization and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in measurement applications. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction. PMID:27607253

  19. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  20. Efficient intensity-based camera pose estimation in presence of depth

    NASA Astrophysics Data System (ADS)

    El Choubassi, Maha; Nestares, Oscar; Wu, Yi; Kozintsev, Igor; Haussecker, Horst

    2013-03-01

    The widespread success of Kinect enables users to acquire both image and depth information with satisfying accuracy at relatively low cost. We leverage the Kinect output to efficiently and accurately estimate the camera pose in the presence of rotation, translation, or both. The applications of our algorithm are vast, ranging from camera tracking to 3D point cloud registration and video stabilization. The state-of-the-art approach uses point correspondences for estimating the pose. More explicitly, it extracts point features, e.g., SURF or SIFT, from images, builds their descriptors, and matches features from different images to obtain point correspondences. However, while feature-based approaches are widely used, they perform poorly in scenes lacking texture due to scarcity of features, or in scenes with repetitive structure due to false correspondences. Our algorithm is intensity-based and requires neither point feature extraction nor descriptor generation/matching. In the absence of depth, the intensity-based approach alone cannot handle camera translation. With Kinect capturing both image and depth frames, we extend the intensity-based algorithm to estimate the camera pose in the case of both 3D rotation and translation. The results are quite promising.
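    The contrast with feature-based matching can be seen in a 1-D toy version of intensity-based alignment, which searches shifts directly on pixel values with no features or descriptors (a brute-force sketch, not the paper's algorithm):

```python
import numpy as np

def shift_by_ssd(ref, cur, max_shift=5):
    """Pick the integer shift minimizing the sum of squared intensity
    differences -- intensity-based alignment with no feature
    extraction or descriptor matching (1-D, integer shifts only)."""
    shifts = list(range(-max_shift, max_shift + 1))
    errs = [np.sum((np.roll(ref, k) - cur) ** 2) for k in shifts]
    return shifts[int(np.argmin(errs))]

# "Current frame" is the reference signal circularly shifted by 3 samples
ref = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
cur = np.roll(ref, 3)
s = shift_by_ssd(ref, cur)
```

    Real intensity-based pose estimation replaces the brute-force search with gradient-based minimization of the photometric error over the 6-DOF pose, using depth to warp pixels between frames.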

  1. Trap-Based Beam Formation Mechanisms and the Development of an Ultra-High-Energy-Resolution Cryogenic Positron Beam

    NASA Astrophysics Data System (ADS)

    Natisin, Michael Ryan

    The focus of this dissertation is the development of a positron beam with significantly improved energy resolution over any beam resolution previously available. While positron interactions with matter are important in a variety of contexts, the range of experimental data available regarding fundamental positron-matter interactions is severely limited as compared to analogous electron-matter processes. This difference is due largely to the difficulties encountered in creating positron beams with narrow energy spreads. Described here is a detailed investigation into the physical processes operative during positron cooling and beam formation in state-of-the-art, trap-based beam systems. These beams rely on buffer gas traps (BGTs), in which positrons are trapped and cooled to the ambient temperature (300 K) through interactions with a molecular gas, and subsequently ejected as a high-resolution pulsed beam. Experimental measurements, analytic models, and simulation results are used to understand the creation and characterization of these beams, with a focus on the mechanisms responsible for setting beam energy resolution. The information gained from these experimental and theoretical studies was then used to design, construct, and operate a next-generation high-energy-resolution beam system. In this new system, the pulsed beam from the BGT is magnetically guided into a new apparatus which re-traps the positrons, cools them to 50 K, and re-emits them as a pulsed beam with superior beam characteristics. Using these techniques, positron beams with total energy spreads as low as 6.9 meV FWHM are produced. This represents a factor of ~5 improvement over the previous state-of-the-art, making it the largest increase in positron beam energy resolution since the development of advanced moderator techniques in the early 1980s. These beams also have temporal spreads of 0.9 μs FWHM and radial spreads of 1 mm FWHM. This represents improvements by factors of ~2 and 10

  2. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    NASA Astrophysics Data System (ADS)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. As the second generation of the Kinect™ was released recently, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and compared its performance on needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions; the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, fifteen artificial liver tumors were punctured with guidance from the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable.
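    The core of the ICP surface matching used for physical-to-image registration is a least-squares rigid-alignment step; below is a standard Kabsch/SVD sketch with correspondences assumed known (ICP itself re-estimates correspondences each iteration), using invented point sets:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the Kabsch/SVD method -- the alignment step inside ICP."""
    sc, dc = src.mean(0), dst.mean(0)
    H = (src - sc).T @ (dst - dc)                       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard vs. reflection
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

# Synthetic check: rotate a toy "surface" 90 deg about z and translate it
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
dst = src @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_align(src, dst)
```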

  3. Positron microprobe at LLNL

    SciTech Connect

    Asoka, P; Howell, R; Stoeffl, W

    1998-11-01

    The electron linac-based positron source at Lawrence Livermore National Laboratory (LLNL) provides the world's highest-current beam of keV positrons. We are building a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with sub-micron resolution. The widely spaced and intense positron packets from the tungsten moderator at the end of the 100 MeV LLNL linac are captured and trapped in a magnetic bottle. The positrons are then released in 1 ns bunches at a 20 MHz repetition rate. With three-stage re-moderation we will compress the cm-sized original beam to a final spot 1 micrometer in diameter on the target. The buncher will compress the arrival time of positrons on the target to less than 100 ps. A detector array with up to 60 BaF2 crystals in paired coincidence will measure the annihilation radiation with high efficiency and low background. The energy of the positrons can be varied from less than 1 keV up to 50 keV.

  4. Intense positron beam at KEK

    NASA Astrophysics Data System (ADS)

    Kurihara, Toshikazu; Yagishita, Akira; Enomoto, Atsushi; Kobayashi, Hitoshi; Shidara, Tetsuo; Shirakawa, Akihiro; Nakahara, Kazuo; Saitou, Haruo; Inoue, Kouji; Nagashima, Yasuyuki; Hyodo, Toshio; Nagai, Yasuyoshi; Hasegawa, Masayuki; Inoue, Yoshi; Kogure, Yoshiaki; Doyama, Masao

    2000-08-01

    A positron beam is a useful probe for investigating the electronic states in solids, especially surface states. The advantage of utilizing positron beams lies in their simpler interactions with matter, owing to the absence of any exchange forces, in contrast to the case of low-energy electrons. However, studies such as low-energy positron diffraction, positron microscopy, and positronium (Ps) spectroscopy, which require high-intensity slow-positron beams, are very limited due to the poor intensity obtained from a conventional radioactive-isotope-based positron source. In conventional laboratories, the slow-positron intensity is restricted to 10^6 e+/s by the strength of the available radioactive source. An accelerator-based slow-positron source is a good candidate for increasing the slow-positron intensity. One result obtained using a high-intensity pulsed positron beam is presented: a study of the origins of Ps emitted from SiO2. We also describe the two-dimensional angular correlation of annihilation radiation (2D-ACAR) measurement system with slow-positron beams and a positron microscope.

  5. Ultrashort megaelectronvolt positron beam generation based on laser-accelerated electrons

    NASA Astrophysics Data System (ADS)

    Xu, Tongjun; Shen, Baifei; Xu, Jiancai; Li, Shun; Yu, Yong; Li, Jinfeng; Lu, Xiaoming; Wang, Cheng; Wang, Xinliang; Liang, Xiaoyan; Leng, Yuxin; Li, Ruxin; Xu, Zhizhan

    2016-03-01

    Experimental generation of ultrashort MeV positron beams with high intensity and high density using a compact laser-driven setup is reported. A high-density gas jet is employed experimentally to generate MeV electrons with high charge; thus, a charge-neutralized MeV positron beam with high density is obtained when the laser-accelerated electrons irradiate high-Z solid targets. It is a novel electron-positron source for the study of laboratory astrophysics. Meanwhile, the MeV positron beam is pulsed with an ultrashort duration of tens of femtoseconds and has a high peak intensity of 7.8 × 10^21 s^-1, thus allowing specific studies of fast kinetics in millimeter-thick materials with high time resolution and exhibiting potential for applications in positron annihilation spectroscopy.

  6. Camera-based noncontact metrology for static/dynamic testing of flexible multibody systems

    NASA Astrophysics Data System (ADS)

    Pai, P. Frank; Ramanathan, Suresh; Hu, Jiazhu; Chernova, DarYa K.; Qian, Xin; Wu, Genyong

    2010-08-01

    Presented here is a camera-based noncontact measurement theory for static/dynamic testing of flexible multibody systems that undergo large rigid, elastic and/or plastic deformations. The procedure and equations for accurate estimation of system parameters (i.e. the location and focal length of each camera and the transformation matrix relating its image and object coordinate systems) using an L-frame with four retroreflective markers are described in detail. Moreover, a method for refinement of estimated system parameters and establishment of a lens distortion model for correcting optical distortions using a T-wand with three markers is described. Dynamically deformed geometries of a multibody system are assumed to be obtained by tracing the three-dimensional instantaneous coordinates of markers adhered to the system's outside surfaces, and cameras and triangulation techniques are used for capturing marker images and identifying markers' coordinates. Furthermore, an EAGLE-500 motion analysis system is used to demonstrate measurements of static/dynamic deformations of six different flexible multibody systems. All numerical simulations and experimental results show that the use of camera-based motion analysis systems is feasible and accurate enough for static/dynamic experiments on flexible multibody systems, especially those that cannot be measured using conventional contact sensors.
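    The triangulation step behind marker-based motion capture can be sketched as linear (DLT) triangulation from two projection matrices; the toy cameras and 3-D point below are invented for the check, not values from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two camera
    projection matrices and the corresponding image observations --
    the basic operation behind marker-based motion capture."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]

# Two invented cameras: identity pose and a 1-unit baseline along x
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
Xw = np.array([0.5, 0.2, 4.0])                      # ground-truth point
h1 = P1 @ np.append(Xw, 1.0); x1 = h1[:2] / h1[2]   # project into camera 1
h2 = P2 @ np.append(Xw, 1.0); x2 = h2[:2] / h2[2]   # project into camera 2
X = triangulate(P1, P2, x1, x2)
```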

  7. Skyline matching based camera orientation from images and mobile mapping point clouds

    NASA Astrophysics Data System (ADS)

    Hofmann, S.; Eggert, D.; Brenner, C.

    2014-05-01

    Mobile mapping is widely used for collecting large amounts of geo-referenced data. Sensor fusion plays an important role, since multiple sensors such as laser scanners and cameras must be evaluated jointly. This requires determining the relative orientation between the sensors. Based on data from a RIEGL VMX-250 mobile mapping system equipped with two laser scanners, four optional cameras, and a highly precise GNSS/IMU system, we propose an approach to improve camera orientations. A manually determined orientation is used as an initial approximation for matching a large number of points in optical images and the corresponding projected scan images. The search space of the point correspondences is reduced to skylines found in both the optical and the scan image. The skyline determination is based on alpha shapes; the actual matching is done via an adapted ICP algorithm. The approximate values of the relative orientation are used as starting values for an iterative resection process. Outliers are removed at several stages of the process. Our approach is fully automatic and improves the camera orientation significantly.

  8. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) was analysed quantitatively. The system's potential in human body measurements was tested against a laser scanner and a 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, lever arm, and object space feature parameters was developed, and the effectiveness of this calibration approach was explored.
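    For a rectified projector-camera pair, the triangulation principle underlying such a sensor reduces to Z = f·b/d; the focal length, baseline, and disparity below are illustrative assumptions, not the Kinect's proprietary values:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulation-principle depth Z = f*b/d for a rectified
    projector/camera pair.  Illustrative values only; the Kinect's
    actual geometry and disparity normalization are proprietary."""
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(580.0, 0.075, 14.5)   # assumed f, b, d
```

    Because depth varies inversely with disparity, a fixed disparity quantization error yields a depth error growing roughly with Z², consistent with the depth-dependent geometric quality studied in the paper.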

  9. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol has been implemented at NAOC; it is composed of an interface board with an Altera FPGA, 32MB SDRAM, and some other accessory devices integrated on it, together with two packages of control software used in the Nios II embedded processor and the remote host PC, respectively. The system is used to replace a 7200-series image acquisition card inserted in a control and data acquisition PC, to download commands to an existing CCD camera, and to collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program run in the Nios II embedded system is built in the Nios II IDE kit, and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of camera commands, initialization of the camera, DMA control, TCP/IP communication, and UDP data uploading.

  10. Demonstration of three-dimensional imaging based on handheld Compton camera

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Nishiyama, T.; Taya, T.; Kabuki, S.

    2015-11-01

    Compton cameras are promising detectors capable of performing measurements across a wide energy range for medical imaging applications, such as nuclear medicine and ion beam therapy. In previous work, we developed a handheld Compton camera to identify environmental radiation hotspots. This camera consists of a 3D position-sensitive scintillator array and multi-pixel photon counter arrays. In this work, we reconstructed the 3D image of a source via list-mode maximum likelihood expectation maximization and demonstrated the imaging performance of the handheld Compton camera. Based on both simulations and experiments, we confirmed that multi-angle data acquisition of the imaging region significantly improved the spatial resolution of the reconstructed image in the direction vertical to the detector. The experimental spatial resolutions in the X, Y, and Z directions at the center of the imaging region were 6.81 mm ± 0.13 mm, 6.52 mm ± 0.07 mm, and 6.71 mm ± 0.11 mm (FWHM), respectively. The results of multi-angle data acquisition show the potential of reconstructing 3D source images.
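    Each coincidence constrains the source to a cone whose opening angle follows from the Compton formula; a minimal sketch (energies in keV, with invented event energies) is:

```python
import numpy as np

def cone_angle(e_scatter_kev, e_absorb_kev, me_c2_kev=511.0):
    """Opening angle (degrees) of the Compton cone from the energies
    deposited in the scatterer and absorber.  Reconstruction then
    backprojects these cones; the list-mode MLEM iteration used in
    the paper is not reproduced here."""
    e0 = e_scatter_kev + e_absorb_kev            # incident photon energy
    cos_t = 1.0 - me_c2_kev * (1.0 / e_absorb_kev - 1.0 / e0)
    return float(np.degrees(np.arccos(cos_t)))

# Invented events from a 662 keV photon: small vs. larger scatter deposit
a_small = cone_angle(1.0, 661.0)
a_large = cone_angle(200.0, 462.0)
```

    A larger energy deposit in the scatterer corresponds to a wider cone, which is why good energy resolution in both detector layers translates directly into angular (and hence spatial) resolution.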

  11. A low-cost web-camera-based multichannel fiber-optic spectrometer structure

    NASA Astrophysics Data System (ADS)

    Sumriddetchkajorn, Sarun

    2010-11-01

    This paper shows how a web camera can be used to realize a low-cost multichannel fiber-optic spectrometer suitable for educational purposes as well as for quality control in small and medium enterprises. Our key idea is to arrange N input optical fibers in a line and use an external dispersive element to separate the incoming optical beams into their associated spectral components in a two-dimensional (2-D) space. Because a web camera comes with a plastic lens, each set of spectral components is imaged onto the 2-D image sensor of the web camera. For our demonstration, we build a 5-channel web-camera-based fiber-optic spectrometer and calibrate it simply by using eight light sources with known peak wavelengths. In this way, it functions as a 5-channel wavelength meter over a 380-700 nm wavelength range with a calculated wavelength resolution of 0.67 nm/pixel. Experimental results show that the peak operating wavelengths of a light-emitting diode (λp = 525 nm) and a laser pointer (λp = 655 nm) can be measured with +/-2.5 nm wavelength accuracy. The total cost of our 5-channel fiber-optic spectrometer is ~USD 92.50.
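
    The reported 0.67 nm/pixel figure implies an approximately linear pixel-to-wavelength mapping over 380-700 nm. A minimal calibration sketch, assuming linearity (real dispersive elements are only locally linear, and the eight actual calibration wavelengths are not listed in the record):

```python
def fit_wavelength_scale(pixels, wavelengths):
    """Least-squares fit of wavelength = slope * pixel + offset from
    light sources with known peak wavelengths."""
    n = len(pixels)
    mx = sum(pixels) / n
    my = sum(wavelengths) / n
    sxx = sum((x - mx) ** 2 for x in pixels)
    sxy = sum((x - mx) * (y - my) for x, y in zip(pixels, wavelengths))
    slope = sxy / sxx          # nm per pixel (dispersion)
    offset = my - slope * mx   # wavelength extrapolated to pixel 0
    return slope, offset
```

    Once fitted per fiber channel, an unknown peak's pixel position converts directly to nanometers.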

  12. Towards direct reconstruction from a gamma camera based on compton scattering

    SciTech Connect

    Cree, M.J.; Bones, P.J. (Dept. of Electrical and Electronic Engineering)

    1994-06-01

    The Compton scattering camera (sometimes called the electronically collimated camera) has been shown by others to have the potential to better the photon-counting statistics and the energy resolution of the Anger camera for imaging in SPECT. By using coincident detection of Compton scattering events on two detecting planes, a photon can be localized to having been sourced on the surface of a cone. New algorithms are needed to achieve fully three-dimensional reconstruction of the source distribution from such a camera. If a complete set of cone-surface projections is collected over an infinitely extending plane, it is shown that the reconstruction problem is not only analytically solvable but also overspecified in the absence of measurement uncertainties. Two approaches to direct reconstruction are proposed, both based on the photons which travel perpendicularly between the detector planes. Results of computer simulations are presented which demonstrate the ability of the algorithms to achieve useful reconstructions in the absence of measurement uncertainties (other than those caused by quantization). The modifications likely to be required in the presence of realistic measurement uncertainties are discussed.

  13. Development and calibration of the Moon-based EUV camera for Chang'e-3

    NASA Astrophysics Data System (ADS)

    Chen, Bo; Song, Ke-Fei; Li, Zhao-Hui; Wu, Qing-Wen; Ni, Qi-Liang; Wang, Xiao-Dong; Xie, Jin-Jiang; Liu, Shi-Jie; He, Ling-Ping; He, Fei; Wang, Xiao-Guang; Chen, Bin; Zhang, Hong-Ji; Wang, Xiao-Dong; Wang, Hai-Feng; Zheng, Xin; E, Shu-Lin; Wang, Yong-Cheng; Yu, Tao; Sun, Liang; Wang, Jin-Ling; Wang, Zhi; Yang, Liang; Hu, Qing-Long; Qiao, Ke; Wang, Zhong-Su; Yang, Xian-Wei; Bao, Hai-Ming; Liu, Wen-Guang; Li, Zhe; Chen, Ya; Gao, Yang; Sun, Hui; Chen, Wen-Chang

    2014-12-01

    The process of development and calibration for the first Moon-based extreme ultraviolet (EUV) camera to observe Earth's plasmasphere is introduced and the design, test and calibration results are presented. The EUV camera is composed of a multilayer film mirror, a thin film filter, a photon-counting imaging detector, a mechanism that can adjust the direction in two dimensions, a protective cover, an electronic unit and a thermal control unit. The center wavelength of the EUV camera is 30.2 nm with a bandwidth of 4.6 nm. The field of view is 14.7° with an angular resolution of 0.08°, and the sensitivity of the camera is 0.11 counts s^-1 Rayleigh^-1. The geometric calibration, the absolute photometric calibration and the relative photometric calibration are carried out under different temperatures before launch to obtain a matrix that can correct geometric distortion and a matrix for relative photometric correction, which are used for in-orbit correction of the images to ensure their accuracy.

  14. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria, the frequent (~1 per hour) explosions and continuous lava flow effusion may be observed from a vantage point at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we used cameras to measure dome surface movements over a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a long-period seismic response and a bimodal infrasound pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time-lapse imagery at one-minute intervals. These longer time scales reveal dynamic structure in the uplift and subsidence trends, effusion rate, and surface flow patterns that is related to internal conduit pressurization. In 2012 we also performed particle image velocimetry with multiple, spatially separated cameras in order to reconstruct 3-dimensional surface movements.
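
    "Particle image velocimetry in the spatial frequency domain" suggests patch correlation via the FFT. A minimal phase-correlation sketch using numpy (integer-pixel shifts only; the sub-10 cm displacement resolution quoted above would require subpixel peak interpolation, which is not shown):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer-pixel shift d such that patch b is (circularly)
    patch a shifted by d, via the normalized cross-power spectrum."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12           # keep only the phase
    corr = np.fft.ifft2(cross).real          # delta-like peak at the shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # indices past the midpoint correspond to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

    Applied to successive images of the same ~10x10 m^2 dome patch, the recovered shift gives the surface displacement between frames.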

  15. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To solve the problem of variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented and the proposed method evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments. PMID:24558344
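
    The record does not state the range formula; under the usual flat-road pinhole assumption, the estimated horizon row converts a vehicle's road-contact row into range as Z = f·h / (y_contact − y_horizon). A sketch with hypothetical parameter names:

```python
def range_from_horizon(y_contact, y_horizon, focal_px, cam_height_m):
    """Flat-road pinhole geometry: range grows as the vehicle's
    road-contact row approaches the (virtual) horizon row."""
    dy = y_contact - y_horizon
    if dy <= 0:
        raise ValueError("contact point must lie below the horizon")
    return focal_px * cam_height_m / dy
```

    This is why a robust run-time horizon estimate matters: a few rows of horizon error translate directly into range error, especially at long range where dy is small.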

  16. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both the spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can realize the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a portion of the array, and the maximum allowed frame rate increases as the size of the area of interest decreases under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only a subset of the pixels is actually used. Beam angles varying across the field of view can be detected after the beam passes through a Cassegrain telescope and focusing optics. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors are computed with the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
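
    The "centroid equation" referred to above is the intensity-weighted mean position of the spot pixels inside the area of interest; a minimal sketch (converting the centroid offset into an angular pointing error via the optics' plate scale is omitted):

```python
def spot_centroid(roi):
    """Intensity-weighted centroid (x, y) of a beacon spot in an
    area-of-interest readout given as a 2D list of pixel values."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(roi):
        for x, v in enumerate(row):
            total += v
            x_sum += x * v
            y_sum += y * v
    if total == 0:
        return None   # no light detected in this frame
    return x_sum / total, y_sum / total
```

    Shrinking the area of interest around the last centroid is what allows the frame rate to rise from acquisition (100×100) to tracking (10×10) mode.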

  17. Using ground-based stereo cameras to derive cloud-level wind fields.

    PubMed

    Porter, John N; Cao, Guang Xia

    2009-08-15

    Upper-level wind fields are obtained by tracking the motion of cloud features as seen in calibrated ground-based stereo cameras. By tracking many cloud features, it is possible to obtain horizontal wind speed and direction over a cone area throughout the troposphere. Preliminary measurements were made at the Mauna Loa Observatory, and resulting wind measurements are compared with winds from the Hilo, Hawaii radiosondes. PMID:19684790

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    NASA Astrophysics Data System (ADS)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and inexpensive hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.

  19. Star-field identification algorithm. [for implementation on CCD-based imaging camera

    NASA Technical Reports Server (NTRS)

    Scholl, M. S.

    1993-01-01

    A description of a new star-field identification algorithm that is suitable for implementation on CCD-based imaging cameras is presented. The minimum identifiable star pattern element consists of an oriented star triplet defined by three stars, their celestial coordinates, and their visual magnitudes. The algorithm incorporates tolerance to faulty input data, errors in the reference catalog, and instrument-induced systematic errors.

  20. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Then, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  1. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect the robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both the 3D information and the intensity image obtained by the ToF camera. Then, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on terrain traversability and the geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  2. Line-based camera calibration with lens distortion correction from a single image

    NASA Astrophysics Data System (ADS)

    Zhou, Fuqiang; Cui, Yi; Gao, He; Wang, Yexin

    2013-12-01

    Camera calibration is a fundamental and important step in many machine vision applications. In some practical situations, computing camera parameters from a single image is becoming increasingly feasible and significant. However, existing single-view calibration methods have various disadvantages, such as ignoring lens distortion or requiring prior knowledge or a special calibration environment. To address these issues, we propose a line-based camera calibration method with lens distortion correction from a single image using three squares of unknown side length. Initially, the radial distortion coefficients are obtained through a non-linear optimization process which is isolated from the pin-hole model calibration, and the detected distorted lines of all the squares are corrected simultaneously. Subsequently, the corresponding lines used for homography estimation are normalized to avoid a specific unstable case, and the intrinsic parameters are calculated from the constraints provided by the vectors of the homography matrix. To evaluate the performance of the proposed method, both simulated and real experiments have been carried out; the results show that the proposed method is robust under general conditions and achieves measurement accuracy comparable to the traditional multiple-view calibration method using a 2D chessboard target.

  3. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    PubMed

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm^2 area and 2 mm thickness. The detectors are pixelated and stacked, forming a 3D detector with voxel sizes of 2 × 1 × 2 mm^3. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS), and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with equal efficiency at source activities up to 10^4 Bq and is completely saturated at 10^9 Bq. The efficiency of the system is evaluated using a simulated ^18F point source phantom in the center of the field of view (FOV), achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8. PMID:24932209

  4. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality whose main advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, in particular with monitoring devices relying on the detection of secondary radiation. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool based on the Monte Carlo data is developed to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements for a solution that is both relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  5. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
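
    The storage-saving symmetry described above can be illustrated in one angular dimension: for a ring scanner with polar voxels, rotating the detector pair and the voxel by the same angular step leaves the system-matrix element unchanged, so only one angular slice needs to be stored. A deliberately simplified sketch (real system matrices also index radial and axial coordinates, omitted here):

```python
def matrix_element(stored_slice, det_angle, voxel_angle, n_angles):
    """Block-circulant lookup: A[det_angle, voxel_angle] equals
    A[0, voxel_angle - det_angle] by rotational symmetry, so only
    the det_angle = 0 slice is stored."""
    return stored_slice[(voxel_angle - det_angle) % n_angles]
```

    Storing one slice instead of n_angles slices is the essence of the reported reduction in system-matrix storage.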

  6. 18F-Labeled Silicon-Based Fluoride Acceptors: Potential Opportunities for Novel Positron Emitting Radiopharmaceuticals

    PubMed Central

    Bernard-Gauthier, Vadim; Wängler, Carmen; Wängler, Bjoern; Schirrmacher, Ralf

    2014-01-01

    Background. Over the recent years, radiopharmaceutical chemistry has experienced a wide variety of innovative pushes towards finding both novel and unconventional radiochemical methods to introduce fluorine-18 into radiotracers for positron emission tomography (PET). These “nonclassical” labeling methodologies based on silicon-, boron-, and aluminium-18F chemistry deviate from commonplace bonding of an [18F]fluorine atom (18F) to either an aliphatic or aromatic carbon atom. One method in particular, the silicon-fluoride-acceptor isotopic exchange (SiFA-IE) approach, invalidates a dogma in radiochemistry that has been widely accepted for many years: the inability to obtain radiopharmaceuticals of high specific activity (SA) via simple IE. Methodology. The most advantageous feature of IE labeling in general is that labeling precursor and labeled radiotracer are chemically identical, eliminating the need to separate the radiotracer from its precursor. SiFA-IE chemistry proceeds in dipolar aprotic solvents at room temperature and below, entirely avoiding the formation of radioactive side products during the IE. Scope of Review. A great plethora of different SiFA species have been reported in the literature ranging from small prosthetic groups and other compounds of low molecular weight to labeled peptides and most recently affibody molecules. Conclusions. The literature over the last years (from 2006 to 2014) shows unambiguously that SiFA-IE and other silicon-based fluoride acceptor strategies relying on 18F− leaving group substitutions have the potential to become a valuable addition to radiochemistry. PMID:25157357

  7. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    PubMed Central

    Saha, Krishnendu; Straus, Kenneth J.; Chen, Yu.; Glick, Stephen J.

    2014-01-01

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction. PMID:25371555

  8. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography.

    PubMed

    Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction. PMID:25371555

  9. An enhanced high-resolution EMCCD-based gamma camera using SiPM side detection.

    PubMed

    Heemskerk, J W T; Korevaar, M A N; Huizenga, J; Kreuger, R; Schaart, D R; Goorden, M C; Beekman, F J

    2010-11-21

    Electron-multiplying charge-coupled devices (EMCCDs) coupled to scintillation crystals can be used for high-resolution imaging of gamma rays in scintillation counting mode. However, the detection of false events as a result of EMCCD noise deteriorates the spatial and energy resolution of these gamma cameras and creates a detrimental background in the reconstructed image. In order to improve the performance of an EMCCD-based gamma camera with a monolithic scintillation crystal, arrays of silicon photomultipliers (SiPMs) can be mounted on the sides of the crystal to detect escaping scintillation photons, which are otherwise neglected. This provides a priori knowledge about the correct number and energies of the gamma interactions to be detected in each CCD frame. This information can be used as an additional detection criterion, e.g. for the rejection of otherwise falsely detected events. The method was tested using a gamma camera based on a back-illuminated EMCCD coupled to a 3 mm thick continuous CsI:Tl crystal, with twelve SiPMs mounted on the sides of the crystal. When the information from the SiPMs is used to select scintillation events in the EMCCD image, the background level for 99mTc is reduced by a factor of 2. Furthermore, the SiPMs enable detection of 125I scintillations. A hybrid SiPM/EMCCD-based gamma camera thus offers great potential for applications such as in vivo imaging of gamma emitters. PMID:21030743

  10. Modeling and simulation of Positron Emission Mammography (PEM) based on double-sided CdTe strip detectors

    NASA Astrophysics Data System (ADS)

    Ozsahin, I.; Unlu, M. Z.

    2014-03-01

    Breast cancer is a leading cause of cancer death among women. Positron Emission Tomography (PET) mammography, also known as Positron Emission Mammography (PEM), is a method for imaging primary breast cancer. Over the past few years, PEM scanners based on scintillation crystals have dramatically increased in importance for the diagnosis and treatment of early-stage breast cancer. However, these detectors have significant limitations, such as poor energy resolution, which can produce false-negative results (missed cancers) as well as false-positive results that suggest cancer and lead to unnecessary biopsies. In this work, a PEM scanner based on CdTe strip detectors is simulated via the Monte Carlo method and evaluated in terms of its spatial resolution, sensitivity, and image quality. The spatial resolution is found to be ~1 mm in all three directions. The results also show that a PEM scanner based on CdTe strip detectors can produce high-resolution images for early diagnosis of breast cancer.

  11. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes.

    PubMed

    Després, Philippe; Funk, Tobias; Shah, Kanai S; Hasegawa, Bruce H

    2007-06-01

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous-crystal gamma camera. The simulated camera was composed of either a 4 x 4 array of four-channel 8 x 8 mm^2 PSAPDs or an 8 x 8 array of 4 x 4 mm^2 discrete APDs. These configurations, each requiring 64 readout channels, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 x 3.6 cm^2 photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (approximately 30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm FWHM were obtained for the APD- and PSAPD-based cameras, respectively, for 99mTc, with corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded best-case energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small field-of-view gamma cameras dedicated, for example, to small-animal or organ imaging. PMID:17505089

  12. Stereoscopic ground-based determination of the cloud base height: theory of camera position calibration with account for lens distortion

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Postylyakov, Oleg V.

    2016-05-01

    For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the height of the cloud base. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In a second step, morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment, and this shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to retrieve the cloud base height. An earlier paper described a mathematical model of the stereophotographic measurement and posed and solved the problem of adjusting the optical axes of the cameras in the paraxial (first-order geometric optics) approximation, applied to the central part of the sky frames. This paper describes a model of the experiment that takes lens distortion into account in the Seidel approximation (to third order in the distance from the optical axis). We developed a procedure for simultaneous camera position calibration and estimation of the lens distortion parameters in the Seidel approximation.
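
    In the Seidel approximation referred to above, radial distortion scales each image coordinate by a factor depending on the squared distance from the optical axis. A minimal forward-model sketch (the calibration itself, fitting the coefficient together with the camera orientation from star images, is beyond this snippet; the single third-order coefficient k1 is an assumed parameterization):

```python
def distort_seidel(x, y, k1):
    """Apply third-order (Seidel) radial distortion about the optical
    axis: (x, y) -> (x, y) * (1 + k1 * r^2), with r^2 = x^2 + y^2."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2
    return x * scale, y * scale
```

    The distortion vanishes on the optical axis and grows with the cube of the off-axis distance, which is why the earlier paraxial treatment sufficed for the central part of the frames.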

  13. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images, and many recognition applications have recently been developed, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase carries much of the important information in an image, independently of the Fourier magnitude. In this work we therefore propose a phase-based recognition system that exploits phase-congruency features for illumination and scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  14. Binarization method based on evolution equation for document images produced by cameras

    NASA Astrophysics Data System (ADS)

    Wang, Yan; He, Chuanjiang

    2012-04-01

    We present an evolution equation-based binarization method for document images produced by cameras. Unlike existing thresholding techniques, the idea behind our method is that a family of gradually binarized images is obtained as the solution of an evolution partial differential equation, starting from the original image. In our formulation, the evolution is controlled by a global force and a local force, both of which have opposite signs inside and outside the objects of interest in the original image. A simple finite difference scheme with a significantly larger time step is used to solve the evolution equation numerically; the desired binarization is typically obtained after only one or two iterations. Experimental results on 122 camera document images show that our method yields good visual quality and OCR performance.
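The global-plus-local force idea can be illustrated with a toy explicit scheme. This is a sketch under simple assumptions (global force = deviation from the image mean, local force = deviation from a 5-point neighborhood mean), not the authors' exact PDE:

```python
import numpy as np

def binarize_evolution(img, dt=10.0, n_iter=2):
    """Evolve a grayscale image toward a binary one. Both forces are
    negative inside dark text and positive in the bright background,
    so a large time step drives pixels to the extremes in 1-2 steps."""
    u = img.astype(float)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)
    for _ in range(n_iter):
        g = u - u.mean()                                   # global force
        nbr = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) + u) / 5.0
        u = np.clip(u + dt * (g + (u - nbr)), 0.0, 1.0)    # explicit step
    return (u > 0.5).astype(np.uint8)

# Dark "text" square on a bright page
page = np.full((20, 20), 200.0)
page[5:10, 5:10] = 50.0
mask = binarize_evolution(page)   # background -> 1, text -> 0
```

The large time step `dt` mirrors the abstract's point that one or two iterations of the explicit scheme already yield the binarization.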

  15. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%. PMID:26193503

  16. Electro optical design for a space camera based on MODTRAN data analysis

    NASA Astrophysics Data System (ADS)

    Haghshenas, Javad

    2014-11-01

    The electro-optical design of a push-broom space camera for a Low Earth Orbit (LEO) remote sensing satellite is discussed in this paper. An atmospheric analysis is performed based on the MODTRAN algorithm, and the total visible-light radiance reaching the camera entrance aperture is simulated with the atmospheric radiative transfer software PcModWin. The simulation covers various sun zenith angles and earth surface albedos to predict the signal performance at different times and locations. According to the simulated total incident radiance, an appropriate linear CCD is chosen, and an optical design is then carried out to fully satisfy the electro-optical requirements. The optical design is based on a Schmidt-Cassegrain scheme, which allows simple fabrication and high accuracy. The proposed electro-optical camera achieves 5.9 m ground resolution with an image swath wider than 23 km on the earth surface. The satellite is assumed to be at 681 km altitude with a ground track speed of 6.8 km/s.
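The quoted ground resolution and swath follow from the pinhole relation GSD = altitude x pixel pitch / focal length. A quick back-of-the-envelope check; the pixel pitch is an assumed, typical linear-CCD value, not stated in the abstract:

```python
# Values from the abstract: 681 km altitude, 5.9 m GSD, 23 km swath.
altitude_m = 681_000.0
gsd_m = 5.9
swath_m = 23_000.0
pixel_pitch_m = 10e-6          # assumption: 10 um detector element

# Pinhole relation: GSD = altitude * pixel_pitch / focal_length
focal_length_m = altitude_m * pixel_pitch_m / gsd_m
n_pixels = swath_m / gsd_m     # elements needed to cover the swath
print(round(focal_length_m, 3), round(n_pixels))
```

Under the assumed 10 um pitch this implies roughly a 1.15 m focal length and a detector of about 3900 elements; other pitch choices rescale the focal length proportionally.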

  17. Improving wavelet denoising based on an in-depth analysis of the camera color processing

    NASA Astrophysics Data System (ADS)

    Seybold, Tamara; Plichta, Mathias; Stechele, Walter

    2015-02-01

    While denoising is an extensively studied task in signal processing research, most denoising methods are designed and evaluated using readily processed image data, e.g. the well-known Kodak data set, with an additive white Gaussian noise (AWGN) model. This kind of test data does not correspond to today's real-world image data taken with a digital camera. Using such unrealistic data to test, optimize and compare denoising algorithms may lead to incorrect parameter tuning or suboptimal choices in research on real-time camera denoising algorithms. In this paper we derive a precise analysis of the noise characteristics for the different steps in the color processing. Based on real camera noise measurements and simulation of the processing steps, we obtain a good approximation for the noise characteristics. We further show how this approximation can be used in standard wavelet denoising methods, improving both wavelet hard thresholding and bivariate thresholding based on our noise analysis results. Both the visual quality and objective quality metrics show the advantage of the proposed method. As the method is implemented using look-up tables that are calculated before the denoising step, it has very low computational complexity and can process HD video sequences in real time on an FPGA.
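The core idea of combining wavelet hard thresholding with a precomputed, signal-dependent noise table can be sketched with a one-level Haar transform. This is a generic illustration, not the paper's implementation; the Poisson-like noise model in the example is assumed:

```python
import numpy as np

def haar2(x):
    """One-level orthonormal 2D Haar transform (even dimensions)."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
    return a, h, v, d

def ihaar2(a, h, v, d):
    """Inverse of haar2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = (a + h + v + d) / 2
    x[0::2, 1::2] = (a - h + v - d) / 2
    x[1::2, 0::2] = (a + h - v - d) / 2
    x[1::2, 1::2] = (a - h - v + d) / 2
    return x

def denoise_lut(noisy, sigma_lut, k=3.0):
    """Hard-threshold the detail bands with a signal-dependent threshold:
    the local noise sigma is looked up from the co-located approximation
    coefficient (a proxy for local intensity)."""
    a, h, v, d = haar2(noisy)
    sigma = sigma_lut(np.clip(a / 2.0, 0.0, 1.0))   # a/2 ~ local mean
    for band in (h, v, d):
        band[np.abs(band) < k * sigma] = 0.0
    return ihaar2(a, h, v, d)

# Example: flat gray patch with noise; assumed Poisson-like noise model
rng = np.random.default_rng(1)
noisy = 0.5 + rng.normal(0.0, 0.05, (64, 64))
lut = lambda m: 0.02 + 0.06 * np.sqrt(m)
denoised = denoise_lut(noisy, lut)
```

Because `sigma_lut` is evaluated only on the approximation band, it can indeed be realized as a precomputed table indexed by intensity, matching the low-complexity argument in the abstract.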

  18. Secure chaotic map based block cryptosystem with application to camera sensor networks.

    PubMed

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic map based block encryption system. The encryption system employs ciphertext feedback to make the sub-keys depend on the plaintext. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic map based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is obtained by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws while maintaining all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks. PMID:22319371

  19. Infrared line cameras based on linear arrays for industrial temperature measurement

    NASA Astrophysics Data System (ADS)

    Drogmoeller, Peter; Hofmann, Guenter; Budzier, Helmut; Reichardt, Thomas; Zimmerhackl, Manfred

    2002-03-01

    The PYROLINE/MikroLine cameras provide continuous, non-contact measurement of linear temperature distributions. Operation in conjunction with the IR_LINE software provides data recording, real-time graphical analysis, process integration and camera-control capabilities. One system is based on pyroelectric line sensors with either 128 or 256 elements, operating at frame rates of 128 and 544 Hz, respectively. Temperatures between 0 and 1300 degrees C are measurable in four distinct spectral ranges: 8-14 micrometers for low temperatures, 3-5 micrometers for medium temperatures, 4.8-5.2 micrometers for glass-temperature applications and 1.4-1.8 micrometers for high temperatures. A newly developed IR line camera (HRP 250) based upon a thermoelectrically cooled, 160-element PbSe detector array operating in the 3-5 micrometers spectral range permits the thermal gradients of fast moving targets to be measured in the range 50-180 degrees C at a maximum frequency of 18 kHz. This special system was used to measure temperature distributions on rotating tires at velocities of more than 300 km/h (190 mph). A modified version of this device was used for real-time measurement of disk-brake rotors under load. Another line camera, consisting of a 256-element InGaAs array, was developed for the 1.4-1.8 micrometers spectral range to detect impurities of polypropylene and polyethylene in raw cotton at frequencies of 2.5-5 kHz.

  20. AOTF-based NO2 camera, results from the AROMAT-2 campaign

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Vanhamel, Jurgen; Van Opstal, Bert; Maes, Jeroen; Merlaud, Alexis; Stebel, Kerstin; Schuettemeyer, Dirk

    2016-04-01

    A hyperspectral imager based on an acousto-optical tunable filter (AOTF) has been developed in the frame of the ALTIUS mission (atmospheric limb tracker for the investigation of the upcoming stratosphere). ALTIUS is a three-channel (UV, VIS, NIR) space-borne limb sounder aiming at the retrieval of concentration profiles of important trace species (O3, NO2, aerosols and more) with good vertical resolution. An optical breadboard was built from the VIS channel concept and now serves as a ground-based remote sensing instrument. Its good spectral resolution (0.6 nm), coupled with its natural imaging capabilities (6° square field of view sampled by a 512x512 pixel sensor), makes it suitable for measuring 2D fields of NO2, similarly to what is nowadays achieved with SO2 cameras. Our NO2 camera was one of the instruments that took part in the second Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT-2) campaign in August 2015. It was pointed at the smokestacks of the coal- and oil-burning power plant of Turceni (Romania) in order to image the emitted NO2 field and derive slant columns and instantaneous emission fluxes. The ultimate goal of the AROMAT campaigns is to prepare the validation of TROPOMI onboard Sentinel-5P. We will briefly describe the instrumental concept of the NO2 camera, its heritage from the ALTIUS mission, and its advantages compared to previous attempts at reaching the same goal. Key results obtained with the camera during the AROMAT-2 campaign will be presented and further improvements will be discussed.

  1. Performance evaluation of dual-crystal APD-based detector modules for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Pepin, Catherine M.; Bérard, Philippe; Cadorette, Jules; Tétrault, Marc-André; Leroux, Jean-Daniel; Michaud, Jean-Baptiste; Robert, Stéfan; Dautet, Henri; Davies, Murray; Fontaine, Réjean; Lecomte, Roger

    2006-03-01

    Positron Emission Tomography (PET) scanners dedicated to small animal studies have seen swift development in recent years. Higher spatial resolution, greater sensitivity and faster scanning procedures are the leading factors driving further improvements. The new LabPET(TM) system is a second-generation APD-based animal PET scanner that combines avalanche photodiode (APD) technology with a highly integrated, fully digital, parallel electronic architecture. This work reports on the performance characteristics of the LabPET quad detector module, which consists of LYSO/LGSO phoswich assemblies individually coupled to reach-through APDs. Individual crystals 2 × 2 × ~10 mm3 in size are optically coupled in pairs along one long side to form the phoswich detectors. Although the LYSO and LGSO photopeaks partially overlap, the good energy resolution and decay time difference allow for efficient crystal identification by pulse-shape discrimination. Conventional analog discrimination techniques result in significant misidentification, but advanced digital signal processing methods make it possible to circumvent this limitation, achieving virtually error-free decoding. Timing resolutions of 3.4 ns and 4.5 ns FWHM have been obtained for LYSO and LGSO, respectively, using analog CFD techniques. However, test bench measurements with digital techniques have shown that resolutions in the range of 2 to 4 ns FWHM can be achieved.
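Pulse-shape discrimination between the two phoswich crystals relies on their different scintillation decay times. A minimal tail-to-total charge sketch with idealized pulses; the decay constants are assumed approximate literature values (LYSO ~40 ns, LGSO ~65 ns), not figures from this record:

```python
import numpy as np

def pulse(tau_ns, t):
    """Idealized scintillation pulse: instant rise, exponential decay."""
    return np.exp(-t / tau_ns)

def tail_to_total(p, t, gate_ns=60.0):
    """Fraction of integrated charge after the gate; a slower decay
    (larger tau) leaves more charge in the tail."""
    return p[t >= gate_ns].sum() / p.sum()

t = np.arange(0.0, 500.0, 1.0)            # 1 ns sampling
r_lyso = tail_to_total(pulse(40.0, t), t)  # ~exp(-60/40) ~ 0.22
r_lgso = tail_to_total(pulse(65.0, t), t)  # ~exp(-60/65) ~ 0.40
# A threshold between the two ratios identifies the crystal of interaction.
```

Real pulses add photon statistics and electronic noise, which is why the abstract finds digital processing necessary for near error-free identification.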

  2. FPGA-Based Front-End Electronics for Positron Emission Tomography

    PubMed Central

    Haselman, Michael; DeWitt, Don; McDougald, Wendy; Lewellen, Thomas K.; Miyaoka, Robert; Hauck, Scott

    2010-01-01

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs’ low cost, ease of use, and selected dedicated hardware, makes them an ideal technology for the data acquisition system of positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next-generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock-rate pulse timing and event localization, are discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We also show that the position of events in the scanner can be determined in real time using a statistics-based positioning algorithm. PMID:21961085

  3. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    SciTech Connect

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-15

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because {sup 134}Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as {sup 40}K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi{sub 4}Ge{sub 3}O{sub 12} (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from {sup 134}Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium.
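The coincidence measurement at the heart of this detector pairs near-simultaneous events from two detector groups. A minimal two-pointer sketch over sorted timestamp lists (illustrative; the actual system uses PET data acquisition hardware):

```python
def coincidences(t1, t2, window):
    """Pair events from two time-sorted timestamp lists whose difference
    is within the coincidence window; each event is used at most once.
    Single gamma photons (e.g. from 40K) rarely find a partner and are
    rejected, which is what lowers the background."""
    i = j = 0
    pairs = []
    while i < len(t1) and j < len(t2):
        dt = t1[i] - t2[j]
        if abs(dt) <= window:
            pairs.append((t1[i], t2[j]))
            i += 1
            j += 1
        elif dt < 0:
            i += 1          # t1 event too early: advance detector 1
        else:
            j += 1          # t2 event too early: advance detector 2
    return pairs

# Two true coincidences plus unpaired singles (times in microseconds)
print(coincidences([1.0, 5.0, 9.0], [1.002, 7.0, 9.001], window=0.01))
```

The linear scan works because both lists are sorted, so the pairing runs in O(n) per acquisition.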

  4. Low background high efficiency radiocesium detection system based on positron emission tomography technology.

    PubMed

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because (134)Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as (40)K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi4Ge3O12 (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from (134)Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium. PMID:24089828

  5. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    NASA Astrophysics Data System (ADS)

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-01

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors are expensive and huge, including semiconductor germanium detectors. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because 134Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can selectively be measured with coincidence. Such major environmental gamma photons as 40K (1.46 MeV) are single photon emitters and a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi4Ge3O12 (BGO) scintillation detectors in double rings (four for each ring) and measured the coincidence between these detectors using PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidence, we eliminated most single gamma photons from the energy distribution and only detected those from 134Cs at an average efficiency of 12%. The minimum detectable concentration of the system for the 100 s acquisition time is less than half of the food monitor requirements in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising to detect low level radiocesium.

  6. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns, and have been widely used for military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low dimensional materials such as the carbon nanotube (CNT) has made much progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems, addressing the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nano science to nano engineering, especially for infrared sensing; they matter not only for the fundamental understanding of CNT photoresponse processes, but also for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and a top parylene packing blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized using digital microscopy and a precise linear stage. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to make a nano-sensor IR camera feasible. To explore more of the infrared light field, we employ compressive sensing algorithms for light field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, are extracted and

  7. Motion measurement of SAR antenna based on high frame rate camera

    NASA Astrophysics Data System (ADS)

    Li, Q.; Cao, R.; Feng, H.; Xu, Z.

    2015-03-01

    Synthetic Aperture Radar (SAR) is currently widely used in marine, agriculture, geology and other fields, and the SAR antenna is one of its most important subsystems. Antenna performance has a significant impact on SAR sensitivity, azimuth resolution, image blur and other parameters. To improve SAR resolution, the SAR antenna is designed and fabricated in a flexible, expandable style. However, the movement of a flexible antenna has a considerable impact on the accuracy of the SAR system, so motion measurement of the flexible antenna is an urgent problem. This paper studies a motion measurement method based on a high frame rate camera, and a flexible antenna motion measurement experiment was designed and completed. In the experiment, a main IMU and a sub IMU were placed at the two ends of a cantilever simulating the flexible antenna; the high frame rate camera was placed above the main IMU, and the imaging target was set on the side of the sub IMU. When the cantilever moved, the IMUs acquired the spatial coordinates of the cantilever movement in real time, while the high frame rate camera captured a series of target images, which were then input into a joint transform correlator (JTC) to obtain the cantilever motion coordinates. Through comparison and analysis of the measurement results, the measurement accuracy of the flexible antenna motion is verified.

  8. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. To process the captured images on the computer, a normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrate the high accuracy and efficiency of the camera system in extracting vibration signals. PMID:26406525
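NCC template tracking with subpixel refinement is a standard building block; a brute-force sketch with a 1-D parabola fit around the correlation peak (an illustration of the generic technique, not the paper's accelerated local search):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template at every valid
    position (brute force; adequate for a small search area)."""
    th, tw = template.shape
    tz = (template - template.mean()) / (template.std() + 1e-12)
    out = np.empty((image.shape[0] - th + 1, image.shape[1] - tw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            win = image[y:y + th, x:x + tw]
            wz = (win - win.mean()) / (win.std() + 1e-12)
            out[y, x] = (tz * wz).mean()      # in [-1, 1], 1 = perfect match
    return out

def subpixel_peak(c):
    """Integer argmax refined by a parabola fit along each axis."""
    y, x = np.unravel_index(np.argmax(c), c.shape)
    dy = dx = 0.0
    if 0 < y < c.shape[0] - 1:
        a, b, cc = c[y - 1, x], c[y, x], c[y + 1, x]
        dy = 0.5 * (a - cc) / (a - 2 * b + cc - 1e-12)
    if 0 < x < c.shape[1] - 1:
        a, b, cc = c[y, x - 1], c[y, x], c[y, x + 1]
        dx = 0.5 * (a - cc) / (a - 2 * b + cc - 1e-12)
    return y + dy, x + dx
```

Tracking the refined peak position frame by frame yields the displacement signal whose time history is the measured vibration.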

  9. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and a high-precision commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which, once corrected, can yield a usable thigh volume measurement. PMID:26928458
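The repeatability statistic quoted here, the technical error of measurement, has a standard formula for paired repeat measurements. A small sketch with hypothetical thigh volumes (litres), not data from the study:

```python
import math

def tem(trial1, trial2):
    """Technical error of measurement for paired repeats:
    TEM = sqrt(sum(d_i^2) / (2n)), %TEM = 100 * TEM / grand mean."""
    d2 = sum((a - b) ** 2 for a, b in zip(trial1, trial2))
    n = len(trial1)
    t = math.sqrt(d2 / (2 * n))
    grand_mean = (sum(trial1) + sum(trial2)) / (2 * n)
    return t, 100.0 * t / grand_mean

# Hypothetical repeat thigh-volume measurements (litres)
t_abs, t_pct = tem([5.1, 4.8, 6.0], [5.2, 4.7, 6.1])
print(round(t_abs, 4), round(t_pct, 2))
```

Expressed as a percentage of the grand mean, the statistic is directly comparable to the <1.0% intra-calibration and ~2.0% inter-calibration values reported in the abstract.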

  10. Camera-Based Lock-in and Heterodyne Carrierographic Photoluminescence Imaging of Crystalline Silicon Wafers

    NASA Astrophysics Data System (ADS)

    Sun, Q. M.; Melnikov, A.; Mandelis, A.

    2015-06-01

    Carrierographic (spectrally gated photoluminescence) imaging of a crystalline silicon wafer using an InGaAs camera and two spread super-bandgap illumination laser beams is introduced in both low-frequency lock-in and high-frequency heterodyne modes. Lock-in carrierographic images of the wafer up to 400 Hz modulation frequency are presented. To overcome the frame rate and exposure time limitations of the camera, a heterodyne method is employed for high-frequency carrierographic imaging which results in high-resolution near-subsurface information. The feasibility of the method is guaranteed by the typical superlinearity behavior of photoluminescence, which allows one to construct a slow enough beat frequency component from nonlinear mixing of two high frequencies. Intensity-scan measurements were carried out with a conventional single-element InGaAs detector photocarrier radiometry system, and the nonlinearity exponent of the wafer was found to be around 1.7. Heterodyne images of the wafer up to 4 kHz have been obtained and qualitatively analyzed. With the help of the complementary lock-in and heterodyne modes, camera-based carrierographic imaging in a wide frequency range has been realized for fundamental research and industrial applications toward in-line nondestructive testing of semiconductor materials and devices.
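The heterodyne trick described above only works because the photoluminescence is superlinear: a nonlinearity mixes the two excitation frequencies and creates a slow beat at their difference, which a frame-rate-limited camera can follow. A numeric demonstration with illustrative frequencies (the abstract's exponent of ~1.7 is used; all other numbers are assumptions):

```python
import numpy as np

fs = 10_000.0                               # sample rate, Hz (assumed)
t = np.arange(0.0, 1.0, 1.0 / fs)
f1, f2 = 2000.0, 2010.0                     # two modulation frequencies
excitation = 2.0 + np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
pl = excitation ** 1.7                      # superlinear PL response

# The quadratic part of the nonlinearity produces a component at f2 - f1.
spec = np.abs(np.fft.rfft(pl))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
low = (freqs > 0) & (freqs < 200)           # inspect below 200 Hz, skip DC
beat = freqs[low][np.argmax(spec[low])]
print(beat)                                 # 10.0 Hz = f2 - f1
```

A strictly linear response (exponent 1.0) would contain no such difference-frequency component, which is why the superlinearity of photoluminescence is stated as the feasibility condition.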

  11. Range camera calibration based on image sequences and dense comprehensive error statistics

    NASA Astrophysics Data System (ADS)

    Karel, Wilfried; Pfeifer, Norbert

    2009-01-01

    This article concentrates on the integrated self-calibration of both the interior orientation and the distance measurement system of a time-of-flight range camera (photonic mixer device). Unlike other approaches that investigate individual distortion factors separately, in the presented approach all calculations are based on the same data set, which is captured without auxiliary devices serving as a high-order reference, with the camera instead being guided by hand. Flat, circular targets stuck on a planar whiteboard at known positions are automatically tracked throughout the amplitude layer of long image sequences. These image observations are introduced into a bundle block adjustment, which on the one hand results in the determination of the interior orientation. Capitalizing on the known planarity of the imaged board, the reconstructed exterior orientations furthermore allow for the derivation of reference values for the actual distance observations. Eased by the automatic reconstruction of the camera's trajectory and attitude, comprehensive statistics are generated, which are accumulated into a 5-dimensional matrix in order to remain manageable. The marginal distributions of this matrix are inspected for the purpose of system identification, whereupon its elements are introduced into another least-squares adjustment, finally leading to clear range correction models and parameters.

  12. Physical Activity Recognition Based on Motion in Images Acquired by a Wearable Camera

    PubMed Central

    Zhang, Hong; Li, Lu; Jia, Wenyan; Fernstrom, John D.; Sclabassi, Robert J.; Mao, Zhi-Hong; Sun, Mingui

    2011-01-01

    A new technique to extract and evaluate physical activity patterns from image sequences captured by a wearable camera is presented in this paper. Unlike standard activity recognition schemes, the video data captured by our device do not include the wearer him/herself. The physical activity of the wearer, such as walking or exercising, is analyzed indirectly through the camera motion extracted from the acquired video frames. Two key tasks, pixel correspondence identification and motion feature extraction, are studied to recognize activity patterns. We utilize a multiscale approach to identify pixel correspondences. When compared with the existing methods such as the Good Features detector and the Speed-up Robust Feature (SURF) detector, our technique is more accurate and computationally efficient. Once the pixel correspondences are determined which define representative motion vectors, we build a set of activity pattern features based on motion statistics in each frame. Finally, the physical activity of the person wearing a camera is determined according to the global motion distribution in the video. Our algorithms are tested using different machine learning techniques such as the K-Nearest Neighbor (KNN), Naive Bayesian and Support Vector Machine (SVM). The results show that many types of physical activities can be recognized from field acquired real-world video. Our results also indicate that, with a design of specific motion features in the input vectors, different classifiers can be used successfully with similar performances. PMID:21779142

  13. A novel multi-digital camera system based on tilt-shift photography technology.

    PubMed

    Sun, Tao; Fang, Jun-Yong; Zhao, Dong; Liu, Xue; Tong, Qing-Xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirements for high-resolution spatial data. This study identifies the shortcomings of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of the derived photogrammetric products. PMID:25835187

  14. Camera characterization using back-propagation artificial neural network based on Munsell system

    NASA Astrophysics Data System (ADS)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The camera output RGB signals do not directly correspond to the tristimulus values based on the CIE standard colorimetric observer; i.e., camera RGB is a device-dependent color space. To achieve accurate color information, we need to perform color characterization, which derives a transformation between camera RGB values and CIE XYZ values. In this paper we set up a Back-Propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with a total of 1267 patches, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The patches were photographed in a light booth with a dark surround. The viewing/illuminating geometry was 0/45 using the D65 illuminant. The lighting illuminating the reference target needs to be as uniform as possible. The BP network was a 5-layer one with a 3-10-10-10-3 structure, which was selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples served as testing samples. Experimental results show that the mean color difference between the reproduced colors and target colors is 0.5 CIELAB color-difference units, smaller than the commonly accepted limit of 2 CIELAB color-difference units. The results are adequate for applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production, and color reproduction across different media.
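
    As a minimal sketch of the idea, the following trains a one-hidden-layer back-propagation network on synthetic RGB-to-XYZ data. The target mapping and the 3-10-3 topology are stand-ins chosen to keep the example short (the paper trains a 3-10-10-10-3 network on measured Munsell patches):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the training data: RGB in [0, 1] and a made-up
# nonlinear RGB -> XYZ map (the real mapping comes from measured patches).
X = rng.random((200, 3))
Y = np.stack([0.4 * X[:, 0]**1.8 + 0.3 * X[:, 1],
              0.2 * X[:, 0] + 0.7 * X[:, 1]**1.8,
              0.9 * X[:, 2]**1.8], axis=1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 10 sigmoid units, linear output layer.
W1 = rng.normal(0.0, 0.5, (3, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 3)); b2 = np.zeros(3)

mse0 = float(((sigmoid(X @ W1 + b1) @ W2 + b2 - Y)**2).mean())

lr = 0.5
for _ in range(20000):
    H = sigmoid(X @ W1 + b1)            # forward pass
    E = H @ W2 + b2 - Y                 # prediction error
    # Back-propagate the mean-squared-error gradient.
    gW2 = H.T @ E / len(X); gb2 = E.mean(0)
    dH = (E @ W2.T) * H * (1.0 - H)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((sigmoid(X @ W1 + b1) @ W2 + b2 - Y)**2).mean())
```

    In the paper the fit is judged in CIELAB color-difference units rather than raw mean-squared error, which would require an additional XYZ-to-CIELAB conversion on top of this regression.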

  15. Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities

    PubMed Central

    Cusack, Jeremy J.; Dickman, Amy J.; Rowcliffe, J. Marcus; Carbone, Chris; Macdonald, David W.; Coulson, Tim

    2015-01-01

    Camera trap surveys exclusively targeting features of the landscape that increase the probability of photographing one or several focal species are commonly used to draw inferences on the richness, composition and structure of entire mammal communities. However, these studies ignore expected biases in species detection arising from sampling only a limited set of potential habitat features. In this study, we test the influence of camera trap placement strategy on community-level inferences by carrying out two spatially and temporally concurrent surveys of medium to large terrestrial mammal species within Tanzania’s Ruaha National Park, employing either strictly game trail-based or strictly random camera placements. We compared the richness, composition and structure of the two observed communities, and evaluated what makes a species significantly more likely to be caught at trail placements. Observed communities differed marginally in their richness and composition, although differences were more noticeable during the wet season and for low levels of sampling effort. Lognormal models provided the best fit to rank abundance distributions describing the structure of all observed communities, regardless of survey type or season. Despite this, carnivore species were more likely to be detected at trail placements relative to random ones during the dry season, as were larger bodied species during the wet season. Our findings suggest that, given adequate sampling effort (> 1400 camera trap nights), placement strategy is unlikely to affect inferences made at the community level. However, surveys should consider more carefully their choice of placement strategy when targeting specific taxonomic or trophic groups. PMID:25950183

  16. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results of the flight experiments illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed system demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also believe that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of derived photogrammetric products. PMID:25835187

  17. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, V.; Djambazov, L.; Dorner, D.; Farnier, C.; Gendotti, A.; Grimm, O.; von Gunten, H. P.; Hildebrand, D.; Horisberger, U.; Huber, B.; Kim, K.-S.; Köhne, J.-H.; Krähenbühl, T.; Krumm, B.; Lee, M.; Lenain, J.-P.; Lorenz, E.; Lustermann, W.; Lyard, E.; Mannheim, K.; Meharga, M.; Neise, D.; Nessi-Tedaldi, F.; Overkemping, A.-K.; Pauss, F.; Renker, D.; Rhode, W.; Ribordy, M.; Rohlfs, R.; Röser, U.; Stucki, J.-P.; Thaele, J.; Tibolla, O.; Viertel, G.; Vogler, P.; Walter, R.; Warda, K.; Weitzel, Q.

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operating voltage and have the potential of higher photon-detection efficiency and lower cost, but have never been fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  18. Optimum design of the carbon fiber thin-walled baffle for the space-based camera

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Song, Gu; Yuan, An; Jin, Guang

    2011-08-01

    The design of a thin-walled baffle for a space-based camera is an important task in lightweight space camera development, owing to stringent mass requirements and a harsh mechanical environment, and especially so for baffles made of carbon fiber. This paper describes the design process of a carbon fiber thin-walled baffle, which is instructive for the design of other thin-walled baffles for space cameras. Through finite element analysis of the sensitivity of structural stiffness and strength to the wall parameters, the designer obtains the design margin that the structure can tolerate within its development requirements, and on this basis can establish a sound optimization criterion for the geometric parameters of the baffle. With a suitable choice of design parameters, optimization of the achievable stiffness and strength of the carbon fiber structure becomes considerably more effective. Combining manufacturing process constraints with the design requirements, the structural scheme of the thin-walled baffle was selected and the specific carbon fiber fabrication technology was optimized through FEM-based optimization, effectively reducing processing cost and process cycle time. Meanwhile, the weight of the thin-walled baffle was reduced significantly while still meeting the structural design requirements. Engineering evaluation shows that the thin-walled baffle satisfies the practical needs of the space-based camera very well: its mass was reduced by about 20%, and the final assessment indices were significantly better than the overall design requirements.

  19. A pixellated γ-camera based on CdTe detectors: clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support to academic and industrial research centres. The intrinsic properties of the semiconductor crystals (low ionisation energy, high energy resolution, high attenuation coefficient) are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has extensive experience in producing high-grade material with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a dedicated 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electric connections were produced by Phase Laboratory (CNRS, France). The compactness of the γ-camera head (thin detector matrix, electronic readout and collimator) facilitates the detection of close γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside explorations. There is a growing clinical requirement in nuclear cardiology to assess early the extent of an

  20. Microstructural probing of ferritic/martensitic steels using internal transmutation-based positron source

    NASA Astrophysics Data System (ADS)

    Krsjak, Vladimir; Dai, Yong

    2015-10-01

    This paper presents the use of an internal 44Ti/44Sc radioisotope source for direct microstructural characterization of ferritic/martensitic (f/m) steels after irradiation in targets of spallation neutron sources. Gamma spectroscopy measurements show a production of ∼1 MBq of 44Ti per 1 g of f/m steel irradiated to 1 dpa (displacements per atom) in the mixed proton-neutron spectrum at the Swiss spallation neutron source (SINQ). In the decay chain 44Ti → 44Sc → 44Ca, positrons are produced together with prompt gamma rays, which enables the application of different positron annihilation spectroscopy (PAS) analyses, including lifetime and Doppler broadening spectroscopy. Due to the high production yield, long half-life and relatively high positron energy of 44Ti, this methodology opens up new potential for simple, effective and inexpensive characterization of radiation-induced defects in f/m steels irradiated in a spallation target.

  1. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.

    PubMed

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The comparison of the estimated results with ground truth produced a variation of 3% from the mean. PMID:26393600
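
    Once the camera yields a water level and the cone parameters from the ellipse fit, the volume estimate itself reduces to geometry: the liquid body is a conical frustum. A minimal sketch, with hypothetical glass dimensions (the paper works in pixels before calibrating to real units):

```python
import math

def frustum_volume(r_base, slope, level):
    """Liquid volume in a conical glass whose radius grows linearly with
    height: r(h) = r_base + slope * h. Units: cm and cm^3 (= ml)."""
    r_top = r_base + slope * level
    return math.pi * level * (r_base**2 + r_base * r_top + r_top**2) / 3.0

def intake(r_base, slope, level_before, level_after):
    """Volume removed between two camera-estimated water levels."""
    return (frustum_volume(r_base, slope, level_before)
            - frustum_volume(r_base, slope, level_after))

# Hypothetical glass: 3 cm base radius, walls widening by 0.1 cm per cm of height.
sip_ml = intake(3.0, 0.1, 10.0, 8.0)
```

    With slope set to zero the formula collapses to the cylinder volume, which is a convenient sanity check on the implementation.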

  2. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    PubMed Central

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The comparison of the estimated results with ground truth produced a variation of 3% from the mean. PMID:26393600

  3. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show a virtual environment of the Moon's surface in an immersive setting. Utilizing stereo 360-degree imagery from the panoramic camera of the Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, a stereo 360-degree panorama stitched from 112 images is projected onto the inside surface of a sphere, according to the panorama orientation coordinates and camera parameters, to build the virtual scene. Because stars can be seen from the Moon at any time, we render the Sun, planets and stars according to the time and the rover's location, based on the Hipparcos catalogue, as the background on the sphere. Immersed in the stereo virtual environment created by this image-based rendering technique, the operator can zoom and pan to interact with the virtual Moon scene and mark interesting objects. The hardware of the immersive virtual Moon system is made up of four high-lumen projectors and a huge curved screen, 31 meters long and 5.5 meters high. This system, which takes all available panoramic camera data and uses it to create an immersive environment in which the operator can interact with the scene and mark interesting objects, contributed heavily to the establishment of science mission goals in the Chang'E-3 mission. After the Chang'E-3 mission, the lab housing this system will be open to the public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be shown to the public on the huge screen in the lab. Based on lunar exploration data, we will make more immersive virtual Moon scenes and animations to help the public understand more about the Moon in the future.

  4. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    PubMed Central

    Parodi, K; Ferrari, A; Sommerer, F; Paganetti, H

    2008-01-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography / computed tomography) imaging for in-vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modeling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield Unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
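
    The Hounsfield-unit-dependent density scaling mentioned above can be illustrated with a piecewise-linear HU-to-mass-density map. The breakpoints below are rough textbook values for air, water and bone, not the calibration curve actually used by the MGH framework or the XiO planning system:

```python
def hu_to_density(hu):
    """Illustrative piecewise-linear Hounsfield unit -> mass density (g/cm^3):
    linear from air (-1000 HU) to water (0 HU), then a gentle approximate
    slope toward bone for positive HU."""
    if hu <= -1000:                      # air
        return 0.00121
    if hu <= 0:                          # air .. water
        return 0.00121 + (hu + 1000.0) * (1.0 - 0.00121) / 1000.0
    return 1.0 + hu * 0.0006             # soft tissue .. bone (approximate)
```

    In the actual framework, each discrete tissue type additionally carries its own elemental composition, and an analogous HU dependence rescales the relative stopping-power ratio to water.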

  5. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. 
Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation

  6. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284
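
    The Mahalanobis error that drives the refinement weights each 3D-to-2D match by the map feature's positional uncertainty. A minimal sketch with hypothetical 2D feature positions and covariances (the actual system derives these covariances from the sensor model and the camera-pose uncertainties):

```python
import numpy as np

def mahalanobis(observed, predicted, cov):
    """Mahalanobis distance between an observed image feature and the
    projected position of a probabilistic map feature with covariance cov."""
    d = np.asarray(observed, float) - np.asarray(predicted, float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# The same 2-pixel offset counts less against an uncertain map feature
# (large covariance) than against a precisely known one.
loose = mahalanobis((2.0, 0.0), (0.0, 0.0), np.diag([4.0, 4.0]))
tight = mahalanobis((2.0, 0.0), (0.0, 0.0), np.diag([0.25, 0.25]))
```

    Summing these squared distances over all matches gives a pose cost of the kind the refinement minimizes, starting from the PnP initial estimate.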

  7. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera

    PubMed Central

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-01-01

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments. PMID:26404284

  8. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather as well as the global climate and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models which make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. This kind of instrument should also be automated and robust, since it may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as in most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on the height of the cloud base, cloud cover and atmospheric visibility that ensure the safety of the pilots and planes. Although there are instruments available in the market to measure those parameters, their relatively high cost makes them unavailable to many local aerodromes. In this work we present a new prototype which has been recently developed and deployed in a local aerodrome as proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of the cloud height from the parallax effect. The new developments consist of a new geometry that allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry to measure the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after the measurement of the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
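
    Both measurement principles can be sketched in a few lines: cloud base height follows from the stereo parallax of a cloud feature, and visibility from the Lambert-Beer contrast decay of dark objects combined with Koschmieder's 2% contrast threshold. The camera geometry below (vertically pointing, known baseline, focal length in pixels) is a simplifying assumption, not the prototype's actual tilted configuration:

```python
import math

def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Height of a cloud feature seen by two upward-looking cameras a known
    baseline apart: disparity = f * B / H, hence H = f * B / disparity."""
    return focal_px * baseline_m / disparity_px

def visibility_koschmieder(contrast, distance_m, c0=1.0):
    """Visual range from the apparent contrast of a dark object at a known
    distance: Lambert-Beer gives C = C0 * exp(-sigma * d); Koschmieder's
    rule places the visual range at 3.912 / sigma (2% contrast threshold)."""
    sigma = -math.log(contrast / c0) / distance_m
    return 3.912 / sigma
```

    For example, a 100 px disparity with a 100 m baseline and a 2000 px focal length corresponds to a 2000 m cloud base; wind speed at cloud height then follows from tracking the same feature across successive frames.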

  9. Camera Based Closed Loop Control for Partial Penetration Welding of Overlap Joints

    NASA Astrophysics Data System (ADS)

    Abt, F.; Heider, A.; Weber, R.; Graf, T.; Blug, A.; Carl, D.; Höfler, H.; Nicolosi, L.; Tetzlaff, R.

    Welding of overlap joints with partial penetration in automotive applications is a challenging process, since the laser power must be set very precisely to achieve a proper connection between the two joining partners without damaging the backside of the sheet stack. Even minor changes in welding conditions can lead to bad results. To overcome this problem, a camera-based closed-loop control for partial penetration welding of overlap joints was developed. With this closed-loop control it is possible to weld such configurations with a stable process result even under changing welding conditions.
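
    The loop can be illustrated with the simplest possible controller: a proportional update of the laser power from a camera-derived penetration signal. Both the gain and the toy plant model (signal proportional to power) are hypothetical; the abstract does not describe the real control law or process dynamics:

```python
def control_step(power_w, signal, setpoint, gain):
    """One proportional correction of the laser power (W) toward the
    desired camera-derived penetration signal."""
    return power_w + gain * (setpoint - signal)

def simulate(steps=50):
    """Toy plant: penetration signal proportional to power (k = 0.001).
    The loop should settle at power = setpoint / k = 1500 W."""
    power, k = 1000.0, 0.001
    for _ in range(steps):
        power = control_step(power, k * power, setpoint=1.5, gain=200.0)
    return power
```

    The point of closing the loop is visible even in this sketch: if the plant gain k drifts (a change in welding conditions), the controller still converges to whatever power reproduces the setpoint signal.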

  10. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebral vasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is more than necessary. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (due to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for preclinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions based on tissue-phantom experimental data. We are able to obtain good reconstructed images with two recently developed algorithms: (1) the depth compensation algorithm (DCA) and (2) the globally convergent method (GCM). We demonstrate volumetric tomographic reconstruction results taken from tissue phantoms; the technique has great potential for determining and monitoring the effect of anti-stroke therapies.

  11. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that makes use of the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence since the early 1900s, little has changed in the design of the fundus camera, and there is minimal information about the design principles utilized. Parameters and specifications involved in the design of a fundus camera are determined, and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities, such as optical coherence tomography, imaging spectroscopy and imaging polarimetry, to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. The design, implementation and testing of a snapshot polarimeter fundus camera are demonstrated.

  12. Positron and Ion Migrations and the Attractive Interactions between like Ion Pairs in the Liquids: Based on Studies with Slow Positron Beam

    NASA Astrophysics Data System (ADS)

    Kanazawa, I.; Sasaki, T.; Yamada, K.; Imai, E.

    2014-04-01

    We have discussed positron and ion diffusion in liquids using a gauge-invariant effective Lagrangian density with a spontaneously broken (hedgehog-like) density and internal non-linear gauge fields (Yang-Mills gauge fields), and have presented its relation to the Hubbard-Onsager theory.

  13. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, fast and accurate camera positioning implemented on an FPGA has become feasible. Through an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor, driven by FPGA hardware, is used to extract the pixel data of three target objects; visible-light LEDs serve as the target points of the instrument. (2) Prior to extraction of the feature-point coordinates, the image is filtered (median filtering is used here) to suppress noise arising from the physical properties of the platform. (3) Marker-point coordinates are extracted by an FPGA hardware circuit, with a new iterative threshold selection method used for image segmentation. The binary segmented image is then labeled, and the coordinates of the feature points are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT) and epipolar constraint methods are applied to the three-dimensional reconstruction of space coordinates from the planar-array CMOS system. A system-on-a-programmable-chip (SOPC) is used here, taking advantage of its dual-core architecture to run matching and coordinate computations separately, thus increasing processing speed.
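
    Step (3) can be prototyped in software before committing it to FPGA logic: an isodata-style iterative threshold followed by a grey-value center-of-gravity. The synthetic image below stands in for a frame containing one bright LED marker:

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    """Isodata-style iterative threshold selection: move the threshold to
    the midpoint of the foreground and background means until it settles."""
    t = img.mean()
    while True:
        fg, bg = img[img > t], img[img <= t]
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def centroid(img, t):
    """Grey-value center of gravity (x, y) of the above-threshold pixels."""
    ys, xs = np.nonzero(img > t)
    w = img[ys, xs].astype(float)
    return float((xs * w).sum() / w.sum()), float((ys * w).sum() / w.sum())

# Synthetic frame: dim background with one bright 3x3 LED blob at (x=5, y=7).
frame = np.full((20, 20), 10.0)
frame[6:9, 4:7] = 200.0
cx, cy = centroid(frame, iterative_threshold(frame))
```

    Both stages map naturally onto hardware: the threshold iteration needs only histogram statistics, and the centroid reduces to running sums of x*I, y*I and I over the above-threshold pixels.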

  14. Positron Physics

    NASA Technical Reports Server (NTRS)

    Drachman, Richard J.

    2003-01-01

    I will give a review of the history of low-energy positron physics, experimental and theoretical, concentrating on the type of work pioneered by John Humberston and the positronics group at University College. This subject became a legitimate subfield of atomic physics under the enthusiastic direction of the late Sir Harrie Massey, and it attracted a diverse following throughout the world. At first purely theoretical, the subject has now expanded to include high brightness beams of low-energy positrons, positronium beams, and, lately, experiments involving anti-hydrogen atoms. The theory requires a certain type of persistence in its practitioners, as well as an eagerness to try new mathematical and numerical techniques. I will conclude with a short summary of some of the most interesting recent advances.

  15. Replacing 16 mm film cameras with high definition digital cameras

    SciTech Connect

    Balch, K.S.

    1995-12-31

    For many years 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne gun cameras, range tracking and other hazardous environments. The companies and government agencies using these cameras are in need of replacing them with a more cost effective solution. Film-based cameras still produce the best resolving capability, however, film development time, chemical disposal, recurring media cost, and faster digital analysis are factors influencing the desire for a 16 mm film camera replacement. This paper will describe a new camera from Kodak that has been designed to replace 16 mm high speed film cameras.

  16. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    SciTech Connect

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-15

Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation- and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature-based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature-based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface
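The fine-tuning stage rests on the standard ICP loop. A compact NumPy sketch of plain point-to-point ICP (brute-force nearest neighbours and a Kabsch solve per iteration, not the paper's accelerated, feature-initialized variant) looks like:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch algorithm: R, t minimizing sum ||R @ s + t - d||^2
    over already-paired points (used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                    # proper rotation (det = +1)
    return R, cd - R @ cs

def icp(src, dst, n_iter=30):
    """Point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    R_tot, t_tot = np.eye(3), np.zeros(3)
    for _ in range(n_iter):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]  # closest dst point per src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    return R_tot, t_tot
```

Like the paper's system, this loop only converges from a good starting pose, which is exactly what the feature-based preregistration supplies.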

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned air vehicle (UAV) during a landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  18. Calibration and disparity maps for a depth camera based on a four-lens device

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Colicchio, Bruno; Lauffenburger, Jean Philippe; Haeberlé, Olivier; Cudel, Christophe

    2015-11-01

We propose a model of depth camera based on a four-lens device. This device is used for validating alternative approaches for calibrating multiview cameras and also for computing disparity or depth images. The calibration method arises from previous works, where the principles of variable homography were extended for three-dimensional (3-D) measurement. Here, calibration is performed between two contiguous views obtained on the same image sensor. This leads us to propose a new way of simplifying calibration by using the properties of the variable homography. The second part addresses new principles for obtaining disparity images without any matching. A fast algorithm using contour propagation is proposed, without requiring structured or random pattern projection. These principles are proposed in the framework of quality control by vision, for inspection under natural illumination. By preserving scene photometry, other standard controls, such as caliper measurements, shape recognition, or barcode reading, can be done conjointly with 3-D measurements. The approaches presented here are evaluated. First, we show that rapid calibration is relevant for devices mounted with multiple lenses. Second, synthetic and real experiments validate our method for computing depth images.

  19. Full 3-D cluster-based iterative image reconstruction tool for a small animal PET camera

    NASA Astrophysics Data System (ADS)

    Valastyán, I.; Imrek, J.; Molnár, J.; Novák, D.; Balkay, L.; Emri, M.; Trón, L.; Bükki, T.; Kerek, A.

    2007-02-01

Iterative reconstruction methods are commonly used to obtain images with high resolution and good signal-to-noise ratio in nuclear imaging. The aim of this work was to develop a scalable, fast, cluster-based, fully 3-D iterative image reconstruction package for our small animal PET camera, the miniPET. The reconstruction package is developed to determine the 3-D radioactivity distribution from list-mode data sets, and it can also simulate noise-free projections of digital phantoms. We separated the system matrix generation and the fully 3-D iterative reconstruction process. As the detector geometry is fixed for a given camera, the system matrix describing this geometry is calculated only once and used for every image reconstruction, making the process much faster. The Poisson and random noise sensitivity of the ML-EM iterative algorithm was studied for our small animal PET system with the help of the simulation and reconstruction tool. The reconstruction tool has also been tested with data collected by the miniPET from line- and cylinder-shaped phantoms and also from a rat.
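The ML-EM algorithm such packages implement has a compact multiplicative update. A toy dense-matrix sketch (not the miniPET's list-mode, cluster-parallel implementation) is:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """ML-EM for emission tomography. y: measured counts per detector bin;
    A: system matrix (probability that a decay in voxel j is seen in bin i).
    Update: x <- x / (A^T 1) * A^T (y / (A x))."""
    x = np.ones(A.shape[1])                  # uniform initial image
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = np.clip(A @ x, 1e-12, None)   # forward projection
        x *= (A.T @ (y / proj)) / sens       # multiplicative EM update
    return x

# Toy example: 3 detector bins, 2 voxels, noise-free data.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y)
```

The fixed system matrix is exactly why the package's one-time matrix precomputation pays off: every reconstruction reuses `A` and only iterates the cheap update.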

  20. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. This data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
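The error measure is simply the distance between known and digitized grid coordinates, expressed relative to the field of view. A small NumPy sketch with a hypothetical one-parameter radial distortion model (not the WETF cameras' actual distortion) illustrates why the outermost points fare worst:

```python
import numpy as np

def radial_distort(points, center, k1):
    """Hypothetical one-parameter radial distortion model:
    displacement grows with the squared distance from the optical center."""
    v = points - center
    r2 = (v ** 2).sum(axis=1, keepdims=True)
    return center + v * (1.0 + k1 * r2)

def percent_error(known, measured, field_size):
    """Distance from known to digitized coordinates, as % of the field of view."""
    return 100.0 * np.linalg.norm(measured - known, axis=1) / field_size

# 5x5 calibration grid spanning a 2-unit field of view.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
seen = radial_distort(grid, np.zeros(2), k1=-0.04)
err = percent_error(grid, seen, field_size=2.0)
# error is zero at the optical center and largest at the corners
```

With this toy barrel coefficient the corner error exceeds 5%, in line with the study's observation that avoiding the lens periphery keeps the error small.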

  1. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera.

    PubMed

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting minimum diameter of the equatorial section and volume and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292
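The volume model is an ordinary least-squares fit of volume on weight and length. A minimal NumPy sketch (with made-up fruit measurements, not the paper's data) shows the shape of such a model and its coefficient of determination:

```python
import numpy as np

def fit_ols(X, y):
    """Ordinary least squares with intercept: y ≈ b0 + X @ b."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def r_squared(X, y, coef):
    """Coefficient of determination of the fitted model."""
    A = np.column_stack([np.ones(len(X)), X])
    resid = y - A @ coef
    return 1.0 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Hypothetical training data: columns are weight (g) and length (mm).
X = np.array([[90.0, 60.0], [100.0, 63.0], [110.0, 66.0],
              [95.0, 61.0], [120.0, 70.0], [105.0, 64.0]])
volume = 0.9 * X[:, 0] + 0.2 * X[:, 1] + 5.0   # synthetic linear relation (cm^3)
coef = fit_ols(X, volume)
```

Stepwise selection, as used in the paper, would wrap such fits in a loop that adds or drops predictors based on their significance.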

  2. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot mounted low cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low cost range camera, which was originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information as provided from the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector-conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the used low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  3. Real object-based 360-degree integral-floating display using multiple depth camera

    NASA Astrophysics Data System (ADS)

    Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam

    2015-03-01

A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to display a real object in the 360-degree viewing zone, multiple depth cameras have been utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. By using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given anamorphic optic system's angular step. The theory has been verified experimentally, showing that the proposed 360-degree integral-floating display can be an excellent way to display a real object in the 360-degree viewing zone.

  4. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416
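The core of such an objective is the weighted point-to-line distance on the image plane. A minimal 2-D NumPy sketch (the point, edge, and weight values are illustrative, not the paper's features):

```python
import numpy as np

def point_line_distance(p, a, b):
    """Perpendicular distance from image point p to the line through a and b."""
    d = b - a
    n = np.array([-d[1], d[0]]) / np.hypot(d[0], d[1])   # unit normal
    return abs(n @ (p - a))

def weighted_cost(points, lines, weights):
    """Weighted sum of squared point-to-line distances: the shape of the
    objective minimized over the extrinsic parameters."""
    return sum(w * point_line_distance(p, a, b) ** 2
               for p, (a, b), w in zip(points, lines, weights))

# A projected LiDAR point 1 px above a horizontal image edge, weight 2.
p = np.array([0.0, 1.0])
edge = (np.array([0.0, 0.0]), np.array([1.0, 0.0]))
cost = weighted_cost([p], [edge], [2.0])
```

In the full calibration, `points` would be LiDAR features projected through the candidate extrinsics, and an optimizer would drive this cost down; the per-feature weights implement the paper's first contribution.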

  5. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the AndroidTM operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.

  6. Classification of Kiwifruit Grades Based on Fruit Shape Using a Single Camera

    PubMed Central

    Fu, Longsheng; Sun, Shipeng; Li, Rui; Wang, Shaojin

    2016-01-01

This study aims to demonstrate the feasibility of classifying kiwifruit into shape grades by adding a single camera to current Chinese sorting lines equipped with weight sensors. Image processing methods are employed to calculate fruit length, maximum diameter of the equatorial section, and projected area. A stepwise multiple linear regression method is applied to select significant variables for predicting minimum diameter of the equatorial section and volume and to establish corresponding estimation models. Results show that length, maximum diameter of the equatorial section and weight are selected to predict the minimum diameter of the equatorial section, with a coefficient of determination of only 0.82 when compared to manual measurements. Weight and length are then selected to estimate the volume, which is in good agreement with the measured one with a coefficient of determination of 0.98. Fruit classification based on the estimated minimum diameter of the equatorial section achieves a low success rate of 84.6%, which is significantly improved using a linear combination of the length/maximum diameter of the equatorial section and projected area/length ratios, reaching 98.3%. Thus, it is possible for Chinese kiwifruit sorting lines to reach international standards of grading kiwifruit on fruit shape classification by adding a single camera. PMID:27376292

  7. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416

  8. A positioning system for forest diseases and pests based on GIS and PTZ camera

    NASA Astrophysics Data System (ADS)

    Wang, Z. B.; Wang, L. L.; Zhao, F. F.; Wang, C. B.

    2014-03-01

Forest diseases and pests cause enormous economic losses and ecological damage every year in China. To prevent and control forest diseases and pests, the key is to obtain accurate information in a timely manner. In order to improve the monitoring coverage rate and economize on manpower, a cooperative investigation model for forest diseases and pests is put forward. It is composed of a video positioning system and manual reconnaissance with mobile GIS embedded in a PDA. The video system is used to scan the disaster area, and is particularly effective where trees are withered. Forest disease prevention and control workers can check the disaster area with the PDA system. To support this investigation model, we developed a positioning algorithm and a positioning system. The positioning algorithm is based on a DEM and a PTZ camera, and the algorithm's accuracy is validated. The software consists of a 3D GIS subsystem, a 2D GIS subsystem, a video control subsystem and a disaster positioning subsystem. The 3D GIS subsystem makes positioning visual and easy to operate in practice. The 2D GIS subsystem can output disaster thematic maps. The video control subsystem can remotely change the pan/tilt/zoom of a digital camera to focus on a suspected area. The disaster positioning subsystem implements the positioning algorithm. It is proved that the positioning system can observe forest diseases and pests in practical application for forest departments.

  9. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether the change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons sitting in front of the camera. The first test aimed to choose the suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm's performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383
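Once a clean chest-depth time series is available, the rate extraction itself is simple. A NumPy sketch of one common approach, picking the dominant frequency in the breathing band (an illustrative stand-in, not the paper's exact estimator, and the signal below is synthetic):

```python
import numpy as np

def respiratory_rate(depth, fs):
    """Breaths per minute from a chest-depth time series (mean
    chest-to-camera distance per frame), via the dominant frequency
    in the physiological breathing band."""
    x = depth - depth.mean()                 # remove the static chest distance
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= 0.1) & (freqs <= 1.0)   # 6-60 breaths/min
    return 60.0 * freqs[band][spec[band].argmax()]

# Hypothetical signal: 15 breaths/min, sampled at 10 Hz for 60 s.
fs = 10.0
t = np.arange(0.0, 60.0, 1.0 / fs)
depth = 800.0 + 5.0 * np.sin(2.0 * np.pi * 0.25 * t)   # mm
rate = respiratory_rate(depth, fs)
```

The band limit plays the same role as the paper's movement rejection: depth changes outside plausible breathing frequencies are ignored.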

  10. Parkinson's disease assessment based on gait analysis using an innovative RGB-D camera system.

    PubMed

    Rocha, Ana Patrícia; Choupina, Hugo; Fernandes, José Maria; Rosas, Maria José; Vaz, Rui; Silva Cunha, João Paulo

    2014-01-01

    Movement-related diseases, such as Parkinson's disease (PD), progressively affect the motor function, many times leading to severe motor impairment and dramatic loss of the patients' quality of life. Human motion analysis techniques can be very useful to support clinical assessment of this type of diseases. In this contribution, we present a RGB-D camera (Microsoft Kinect) system and its evaluation for PD assessment. Based on skeleton data extracted from the gait of three PD patients treated with deep brain stimulation and three control subjects, several gait parameters were computed and analyzed, with the aim of discriminating between non-PD and PD subjects, as well as between two PD states (stimulator ON and OFF). We verified that among the several quantitative gait parameters, the variance of the center shoulder velocity presented the highest discriminative power to distinguish between non-PD, PD ON and PD OFF states (p = 0.004). Furthermore, we have shown that our low-cost portable system can be easily mounted in any hospital environment for evaluating patients' gait. These results demonstrate the potential of using a RGB-D camera as a PD assessment tool. PMID:25570653

  11. Portable Positron Measurement System (PPMS)

    SciTech Connect

    2011-01-01

Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  12. Portable Positron Measurement System (PPMS)

    ScienceCinema

    None

    2013-05-28

Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  13. Improved photo response non-uniformity (PRNU) based source camera identification.

    PubMed

    Cooper, Alan J

    2013-03-10

    The concept of using Photo Response Non-Uniformity (PRNU) as a reliable forensic tool to match an image to a source camera is now well established. Traditionally, the PRNU estimation methodologies have centred on a wavelet based de-noising approach. Resultant filtering artefacts in combination with image and JPEG contamination act to reduce the quality of PRNU estimation. In this paper, it is argued that the application calls for a simplified filtering strategy which at its base level may be realised using a combination of adaptive and median filtering applied in the spatial domain. The proposed filtering method is interlinked with a further two stage enhancement strategy where only pixels in the image having high probabilities of significant PRNU bias are retained. This methodology significantly improves the discrimination between matching and non-matching image data sets over that of the common wavelet filtering approach. PMID:23312587
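The spatial-domain residual idea can be sketched compactly: denoise with a simple spatial filter, take the residual as the noise pattern, and correlate it against a camera fingerprint. The sketch below uses a plain 3x3 median filter and normalized correlation as illustrative stand-ins for the paper's adaptive-plus-median strategy and its two-stage enhancement; the fingerprint and image are synthetic:

```python
import numpy as np

def median3x3(img):
    """3x3 spatial median filter (reflect-padded): the kind of simple
    spatial-domain filtering the paper favors over wavelet de-noising."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(stack, axis=0)

def noise_residual(img):
    """Residual carrying the sensor's noise pattern: image minus denoised image."""
    return img - median3x3(img)

def ncc(a, b):
    """Normalized cross-correlation between a residual and a fingerprint."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# Hypothetical camera fingerprint embedded in a flat test image.
rng = np.random.default_rng(1)
fingerprint = rng.normal(size=(64, 64))    # "true" source camera pattern
other = rng.normal(size=(64, 64))          # unrelated camera pattern
image = 120.0 + fingerprint                # flat scene + sensor pattern
res = noise_residual(image)
```

On this synthetic data the residual correlates strongly with the embedding fingerprint and near zero with the unrelated one, which is the match/non-match discrimination the paper seeks to sharpen.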

  14. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
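The block-match step used to speed up motion estimation can be sketched directly. Below is a minimal exhaustive-search block matcher over a small window (the frames are synthetic; the paper applies this inside CS video recovery, which is not reproduced here):

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of the current frame, find
    the displacement into the reference frame minimizing the sum of
    absolute differences (SAD)."""
    H, W = ref.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block]
            best_sad, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate block would fall outside the frame
                    sad = np.abs(ref[y:y + block, x:x + block] - target).sum()
                    if sad < best_sad:
                        best_sad, best_mv = sad, (dy, dx)
            vectors[by, bx] = best_mv
    return vectors

# Two hypothetical frames of the same random scene, shifted by (2, 1) pixels.
rng = np.random.default_rng(2)
big = rng.random((24, 24))
ref, cur = big[0:16, 0:16], big[2:18, 1:17]
motion = block_match(ref, cur)
```

Restricting the search to a small window around each block is what yields the reported speedup over frame-wide motion estimation.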

  15. Real-time neural network based camera localization and its extension to mobile robot control.

    PubMed

    Choi, D H; Oh, S Y

    1997-06-01

    The feasibility of using neural networks for camera localization and mobile robot control is investigated here. This approach has the advantages of eliminating the laborious and error-prone process of imaging system modeling and calibration procedures. Basically, two different approaches of using neural networks are introduced of which one is a hybrid approach combining neural networks and the pinhole-based analytic solution while the other is purely neural network based. These techniques have been tested and compared through both simulation and real-time experiments and are shown to yield more precise localization than analytic approaches. Furthermore, this neural localization method is also shown to be directly applicable to the navigation control of an experimental mobile robot along the hallway purely guided by a dark wall strip. It also facilitates multi-sensor fusion through the use of multiple sensors of different types for control due to the network's capability of learning without models. PMID:9427102

  16. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras.

    PubMed

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  17. Scintillators for positron emission tomography

    SciTech Connect

    Moses, W.W.; Derenzo, S.E.

    1995-09-01

Like most applications that utilize scintillators for gamma detection, Positron Emission Tomography (PET) desires materials with high light output, short decay time, and excellent stopping power that are also inexpensive, mechanically rugged, and chemically inert. Realizing that this "ultimate" scintillator may not exist, this paper evaluates the relative importance of these qualities and describes their impact on the imaging performance of PET. The most important PET scintillator quality is the ability to absorb 511 keV photons in a small volume, which affects the spatial resolution of the camera. The dominant factor is a short attenuation length (≤1.5 cm is required), although a high photoelectric fraction is also important (>30% is desired). The next most important quality is a short decay time, which affects both the dead time and the coincidence timing resolution. Detection rates for single 511 keV photons can be extremely high, so decay times ≤500 ns are essential to avoid dead time losses. In addition, positron annihilations are identified by time coincidence, so ≤5 ns fwhm coincidence pair timing resolution is required to identify events with narrow coincidence windows, reducing contamination due to accidental coincidences. Current trends in PET cameras are toward septaless, "fully-3D" cameras, which have significantly higher count rates than conventional 2-D cameras and so place higher demands on scintillator decay time. Light output affects energy resolution, and thus the ability of the camera to identify and reject events where the initial 511 keV photon has undergone Compton scatter in the patient. The scatter-to-true event fraction is much higher in fully-3D cameras than in 2-D cameras, so future PET cameras would benefit from scintillators with a 511 keV energy resolution <10–12% fwhm.
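The accidental-coincidence argument can be made quantitative with the standard randoms-rate relation. For a detector pair with singles rates $S_1$ and $S_2$ and a coincidence window of width $2\tau$:

```latex
R_{\text{random}} = 2\tau\, S_1 S_2
```

Halving the window (which faster scintillator decay and better timing resolution permit) halves the accidental background, while true coincidences are unaffected as long as $2\tau$ remains larger than the timing resolution; this is why fully-3D cameras, with their high singles rates, press so hard on decay time.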

  18. Portable profilometer based on low-coherence interferometry and smart pixel camera

    NASA Astrophysics Data System (ADS)

    Salbut, Leszek; Pakuła, Anna; Tomczewski, Sławomir; Styk, Adam

    2010-09-01

    Although low-coherence interferometers (e.g., white light interferometers) are commercially available, they are generally quite bulky, expensive, and of limited flexibility. In this paper a new portable profilometer based on low-coherence interferometry is presented. The device uses a white-light diode with a controlled spectrum shape to increase the contrast of the zero-order fringe, which allows its faster and more reliable localization. For image analysis, a special type of CMOS matrix (a so-called smart pixel camera), synchronized with the reference mirror transducer, is applied. Because the fringe contrast analysis is realized in hardware, independently in each pixel, the measurement time decreases significantly. High-speed processing together with the compact design allows the profilometer to be used as a portable device for both indoor and outdoor measurements. The capabilities of the designed profilometer are illustrated by a few application examples.

  19. Design and fabrication of MEMS-based thermally-actuated image stabilizer for cell phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

    A micro-electro-mechanical system (MEMS)-based image stabilizer is proposed to counteract shaking in cell phone cameras. The proposed stabilizer (dimensions 8.8 × 8.8 × 0.2 mm³) includes a two-axis decoupled XY stage and has sufficient strength to suspend the image sensor (IS) used for the anti-shaking function. The XY stage is designed to route electrical signals from the suspended IS through eight signal springs and 24 signal outputs. The maximum actuating distance of the stage is larger than 25 μm, which is sufficient to compensate for hand shake. The voltage required for the 25 μm displacement is below 20 V; the dynamic resonant frequency of the actuating device is 4485 Hz, and the rise time is 21 ms.

  20. A rut measuring method based on laser triangulation with single camera

    NASA Astrophysics Data System (ADS)

    Ma, Yue; Zhang, Wen-hao; Li, Song; Wang, Hong

    2013-12-01

    Pavement rutting is one of the major forms of highway distress. In this article, a rut-measuring method based on laser triangulation with a single camera is presented. A rut-resolution model is established to design the parameters of the optical system, and the laser profile in the road image is extracted by median filtering and wavelet transform. An accurate calibration method for a large field of view, consisting of 28 sub-FOV calibrations and road-profile reconstruction, is also developed. The calibration experiment and a new method for calculating rut depth are described. Measurement results are reported for both a static gauge-block test and dynamic measurements on a highway. They show that, with a single CCD camera and two 808 nm semiconductor lasers, the method reaches an accuracy of 1 mm for rut measurement under harsh conditions.
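
    A simplified sketch of the stripe-extraction step (median filtering plus a sub-pixel centroid; the paper additionally uses a wavelet transform, and the function name and window sizes here are assumptions):

```python
import numpy as np

def extract_laser_profile(img, k=3):
    """Per-column laser stripe extraction: median-filter each column
    (window k) to suppress speckle and impulse noise, then locate the
    stripe row by an intensity centroid around the brightest pixel
    (sub-pixel estimate)."""
    h, w = img.shape
    rows = np.arange(h)
    profile = np.empty(w)
    pad = k // 2
    for x in range(w):
        col = img[:, x].astype(float)
        # simple 1-D median filter with edge padding
        padded = np.pad(col, pad, mode='edge')
        filt = np.array([np.median(padded[i:i + k]) for i in range(h)])
        peak = filt.argmax()
        lo, hi = max(0, peak - 2), min(h, peak + 3)
        win = filt[lo:hi]
        profile[x] = (rows[lo:hi] * win).sum() / win.sum()
    return profile
```

    With the stripe row known per column, triangulation then maps each sub-pixel row to a height on the road surface through the calibrated camera-laser geometry.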

  1. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Manual surface-defect detection and dimension measurement of automotive bevel gears are costly, inefficient, slow and inaccurate. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  2. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results with various feature extraction and fusion methods show that our approach is effective for gender recognition, based on a comparison of recognition rates with conventional systems. PMID:26828487

  3. A novel method to measure the ambient aerosol phase function based on dual ccd-camera

    NASA Astrophysics Data System (ADS)

    Bian, Yuxuan; Zhao, Chunsheng; Tao, Jiangchuan; Kuang, Ye; Zhao, Gang

    2016-04-01

    Aerosol scattering phase function is a measure of the light intensity scattered from particles as a function of scattering angle. It is important for understanding aerosol climate effects and for remote-sensing inversion analysis. In this study, a novel method to measure the ambient aerosol phase function is developed, based on a dual charge-coupled-device (CCD) camera laser detection system. An integrating nephelometer is used to correct the inversion result. The instrument was validated by both field and laboratory measurements of atmospheric aerosols. A Mie theory model was used, with measurements of the particle number size distribution and the mass concentration of black carbon, to simulate the aerosol phase function for comparison with the values from the instrument. The comparison shows good consistency.

  4. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves. PMID:22016249

  5. Cross-ratio-based line scan camera calibration using a planar pattern

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Qiu, Shaohua

    2016-01-01

    A flexible new technique is proposed to calibrate the geometric model of line scan cameras. In this technique, the line scan camera is rigidly coupled to a calibrated frame camera to establish a pair of stereo cameras. The linear displacements and rotation angles between the two cameras are fixed but unknown. This technique only requires the pair of stereo cameras to observe a specially designed planar pattern shown at a few (at least two) different orientations. At each orientation, a stereo pair is obtained including a linear array image and a frame image. Radial distortion of the line scan camera is modeled. The calibration scheme includes two stages. First, point correspondences are established from the pattern geometry and the projective invariance of cross-ratio. Second, with a two-step calibration procedure, the intrinsic parameters of the line scan camera are recovered from several stereo pairs together with the rigid transform parameters between the pair of stereo cameras. Both computer simulation and real data experiments are conducted to test the precision and robustness of the calibration algorithm, and very good calibration results have been obtained. Compared with classical techniques which use three-dimensional calibration objects or controllable moving platforms, our technique is affordable and flexible in close-range photogrammetric applications.
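
    The projective invariance of the cross-ratio that underpins the point-correspondence step can be illustrated in a few lines; the Möbius map below is an arbitrary stand-in for a perspective projection of a line, not the paper's calibration code:

```python
def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points given by their
    scalar coordinates along the line."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def mobius(x):
    """An arbitrary 1-D projective (Moebius) transformation, standing in
    for the perspective mapping between the planar pattern and its image."""
    return (2.0 * x + 1.0) / (x + 3.0)
```

    Applying any such projective map to all four points leaves the cross-ratio unchanged, which is what lets ratios measured on the pattern identify their counterparts in the linear-array image.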

  6. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ~3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
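
    A stripped-down version of the underlying geometry: for a sighting at the sea surface, its apparent dip below the horizon fixes the range, given the camera height. This flat-sea sketch ignores Earth curvature, refraction, and the paper's reference-point and frame-rotation corrections, and the focal length and names are assumptions:

```python
import math

def range_from_dip(cam_height_m, pixels_below_horizon, focal_px):
    """Ground range of a sea-surface target from its apparent dip below
    the horizon: dip = atan(pixels / focal_px), range = h / tan(dip).
    Flat-sea approximation; curvature and refraction matter at ranges
    of a few kilometres."""
    dip = math.atan2(pixels_below_horizon, focal_px)
    return cam_height_m / math.tan(dip)
```

    With the paper's 9.59 m camera height, a target 80 px below the horizon through an assumed 5000 px focal length would sit roughly 600 m out, comparable to the mean detection distance reported above.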

  7. Estimating the spatial position of marine mammals based on digital camera recordings.

    PubMed

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-02-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator-prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ~3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982

  8. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V.

    2016-06-01

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ~ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.
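
    The conventional four-frame amplitude measurement that the single-frame lock-in scheme improves on is a textbook four-step demodulation; a sketch (not the paper's code):

```python
import math

def four_frame_amplitude(i0, i1, i2, i3):
    """Four-phase-step demodulation: with frames recorded at reference
    phase shifts 0, pi/2, pi, 3*pi/2, the interference term
    I_k = B + A*cos(phi + k*pi/2) yields the amplitude A from two
    quadrature differences, cancelling the background B."""
    return math.sqrt((i0 - i2) ** 2 + (i1 - i3) ** 2) / 2.0
```

    Because the four frames are captured sequentially, the speckle field must stay correlated across all of them, which is why this baseline method is more vulnerable to fast speckle decorrelation than a single-frame lock-in measurement.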

  9. Investigation of ionic conductivity of polymeric electrolytes based on poly (ether urethane) networks using positron probe

    NASA Astrophysics Data System (ADS)

    Peng, Z. L.; Wang, B.; Li, S. Q.; Wang, S. J.; Liu, H.; Xie, H. Q.

    1994-10-01

    Positron-lifetime measurements have been made for poly(ether urethane), undoped and doped with [LiClO4]/[Unit] = 0.05, in the temperature range 120-340 K. The measured lifetime spectra were resolved into three components. The lifetime and intensity of ortho-positronium were used to evaluate the amount of free volume in poly(ether urethane). It was found that the variation of ionic conductivity with temperature and salt concentration can be rationalised in terms of free-volume considerations.
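
    The link between the ortho-positronium lifetime and the free-volume hole size is conventionally made through the Tao-Eldrup relation; a sketch (the empirical layer thickness ΔR ≈ 0.166 nm is the commonly used value, and the bisection inversion is mine, not from the paper):

```python
import math

DELTA_R = 0.166  # nm, empirical electron-layer thickness (commonly used value)

def ops_lifetime_ns(radius_nm):
    """Tao-Eldrup relation: o-Ps pick-off lifetime (ns) in a spherical
    free-volume hole of radius R (nm)."""
    x = radius_nm / (radius_nm + DELTA_R)
    return 0.5 / (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))

def hole_radius_nm(lifetime_ns, lo=0.01, hi=1.0):
    """Invert the relation by bisection; the lifetime grows
    monotonically with hole radius over this range."""
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if ops_lifetime_ns(mid) < lifetime_ns:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

    The free-volume fraction is then typically estimated from the hole volume 4πR³/3 weighted by the o-Ps intensity, which is how lifetime spectra connect to the conductivity trends described above.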

  10. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models. This implies that improvements at the sensor-data level can still bring the solutions significantly closer to the GRACE baseline accuracy. A recent analysis of the GRACE star camera data (SCA1B RL02) revealed unexpectedly high noise. As the star camera (SCA) data are essential for processing the K-band ranging data and the accelerometer data, a thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B, focusing on the method used to combine the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solutions by applying two different combination methods to the SCA Level-1A data. The first method introduces information about the anisotropic accuracy of the star camera measurement in the form of a weighting matrix; this method was also applied in the official processing. The alternative method merges only the well-determined SCA boresight directions, and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains systematically higher noise, by a factor of about 3-4. The analysis revealed that the reason is an incorrect implementation of the algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement across the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data
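
    Combining the attitudes of the two SCA heads amounts to a weighted average of quaternions. A minimal scalar-weighted sketch using the eigenvector (Markley) method follows; the actual Level-1A combination uses a full anisotropic weight matrix per head, so this is an illustration of the averaging principle only:

```python
import numpy as np

def average_quaternions(quats, weights):
    """Weighted quaternion mean (eigenvector method): the average
    attitude is the eigenvector, for the largest eigenvalue, of
    M = sum_i w_i * q_i q_i^T. Robust to the q / -q sign ambiguity
    of quaternion attitude representations."""
    M = np.zeros((4, 4))
    for q, w in zip(quats, weights):
        q = np.asarray(q, dtype=float)
        q = q / np.linalg.norm(q)
        M += w * np.outer(q, q)
    eigvals, eigvecs = np.linalg.eigh(M)
    return eigvecs[:, -1]  # eigh sorts eigenvalues in ascending order
```

    Because the product q qᵀ is unchanged when q flips sign, the method averages correctly even when the two heads report equivalent attitudes with opposite quaternion signs.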

  11. A novel virtual four-ocular stereo vision system based on single camera for measuring insect motion parameters

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Zhang, Guangjun; Chen, Dazhi

    2005-11-01

    A novel virtual four-ocular stereo measurement system based on a single high-speed camera is proposed for measuring the two beating wings of a high-speed flapping insect. The principle of the virtual monocular system, consisting of a few planar mirrors and a single high-speed camera, is introduced, and the stereo vision measurement principle based on optical triangulation is explained. The wing kinematics parameters are measured. Results show that this virtual stereo system not only greatly reduces system cost but is also effective for insect motion measurement.
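
    For rectified views, the optical-triangulation principle reduces to the familiar disparity relation; the baseline and focal length below are illustrative values, not those of the mirror rig:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified two-view triangulation: depth Z = f * B / d, with the
    focal length f in pixels, baseline B in metres, and disparity d
    in pixels."""
    return focal_px * baseline_m / disparity_px
```

    A mirror-built virtual stereo rig obeys the same relation, with the effective baseline set by the mirror geometry rather than by a second physical camera, which is where the cost saving comes from.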

  12. A Novel Method of Object Detection from a Moving Camera Based on Image Matching and Frame Coupling

    PubMed Central

    Chen, Yong; Zhang, Rong hua; Shang, Lei

    2014-01-01

    A new method based on image matching and frame coupling to handle the problems of object detection caused by a moving camera and object motion is presented in this paper. First, feature points are extracted from each frame. Then, motion parameters can be obtained. Sub-images are extracted from the corresponding frame via these motion parameters. Furthermore, a novel searching method for potential orientations improves efficiency and accuracy. Finally, a method based on frame coupling is adopted, which improves the accuracy of object detection. The results demonstrate the effectiveness and feasibility of our proposed method for a moving object with changing posture and with a moving camera. PMID:25354301

  13. An energy-optimized collimator design for a CZT-based SPECT camera

    NASA Astrophysics Data System (ADS)

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2016-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy levels of gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With the pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance, were carefully studied. Scatter and penetration of the collimator, sensitivity and spatial resolution of the system were assessed for four radionuclides including 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then decided upon such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked to prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, and 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates the promise of our proposed energy-optimized collimator to be used in a CZT-based gamma camera, with comparable or even better imaging performance versus commercial
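
    For parallel-hole designs, textbook (Anger) estimates tie the studied variables together. A sketch using the optimal geometry quoted above; the shape factor K = 0.282 for square holes and μ ≈ 3.6 mm⁻¹ for tungsten at 140 keV are standard approximate values, not numbers from the paper:

```python
def collimator_performance(hole_w_mm, hole_len_mm, septa_mm, dist_mm,
                           mu_per_mm=3.6):
    """Parallel-hole collimator estimates (Anger):
    effective hole length l_e = l - 2/mu corrects for septal penetration;
    geometric resolution  R = d * (l_e + b) / l_e;
    geometric sensitivity g = (K * d / l_e)^2 * (d / (d + t))^2,
    with K = 0.282 for square holes on a square array."""
    l_eff = hole_len_mm - 2.0 / mu_per_mm
    resolution = hole_w_mm * (l_eff + dist_mm) / l_eff
    sensitivity = ((0.282 * hole_w_mm / l_eff) ** 2
                   * (hole_w_mm / (hole_w_mm + septa_mm)) ** 2)
    return resolution, sensitivity
```

    For the quoted 23 mm long, 1.28 mm wide holes with 0.32 mm septa at a 50 mm source distance, these formulas give a resolution of a few millimetres and a geometric sensitivity of order 10⁻⁴, the familiar resolution-sensitivity trade a Monte Carlo optimization refines.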

  14. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation into the atmospheric, oceanic, and land-surface parameters of satellite products, caused by misclassification of the cloud mask, is a critical issue for improving the accuracy of those products. Thus, characterizing the accuracy of the cloud mask is important for investigating its influence on satellite products. In this study, we proposed a method for validating cloud masks derived from multiwavelength satellite data using ground-based sky camera (GSC) data. First, a cloud-cover algorithm for GSC data was developed using a sky index and a brightness index. Then, cloud masks derived from Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagation caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS-derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. The influence of error propagation by the MOD35 cloud mask on the MODIS-derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask, while the influence of error propagation by the CLAUDIA cloud mask on MODIS-derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask. PMID:25402920
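
    The GSC cloud-cover step can be sketched with a normalized blue-red "sky index" per pixel. The index definition and threshold below are common choices in sky-camera work, not necessarily the paper's, and the brightness-index term is omitted:

```python
import numpy as np

def cloud_fraction(rgb, threshold=0.12):
    """Per-pixel cloud screening for a ground-based sky camera: the
    normalized blue/red difference ("sky index") is high for clear blue
    sky and near zero for grey/white cloud. Pixels below the threshold
    are counted as cloudy; the threshold is illustrative."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    sky_index = (b - r) / np.maximum(b + r, 1e-9)
    cloudy = sky_index < threshold
    return cloudy.mean()
```

    Thresholding the resulting cloudy/clear map on the satellite pixel's footprint is what makes the GSC mask directly comparable with MOD35 and CLAUDIA classifications.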

  15. A pnCCD-based, fast direct single electron imaging camera for TEM and STEM

    NASA Astrophysics Data System (ADS)

    Ryll, H.; Simson, M.; Hartmann, R.; Holl, P.; Huth, M.; Ihle, S.; Kondo, Y.; Kotula, P.; Liebel, A.; Müller-Caspary, K.; Rosenauer, A.; Sagawa, R.; Schmidt, J.; Soltau, H.; Strüder, L.

    2016-04-01

    We report on a new camera based on a pnCCD sensor for applications in scanning transmission electron microscopy. Emerging microscopy techniques demand improved detectors with regard to readout rate, sensitivity and radiation hardness, especially in scanning mode. The pnCCD is a 2D imaging sensor that meets these requirements. Its intrinsic radiation hardness permits direct detection of electrons. The pnCCD is read out at a rate of 1,150 frames per second with an image area of 264 × 264 pixels. In binning or windowing modes, the readout rate increases almost linearly, for example to 4,000 frames per second at 4× binning (264 × 66 pixels). Single electrons with energies from 300 keV down to 5 keV can be distinguished due to the high sensitivity of the detector. Three applications in scanning transmission electron microscopy are highlighted to demonstrate that the pnCCD satisfies experimental requirements, especially fast recording of 2D images. In the first application, 65,536 2D diffraction patterns were recorded in 70 s, and STEM images corresponding to the intensities of various diffraction peaks were reconstructed. In the second application, the microscope was operated in a Lorentz-like mode: magnetic domains were imaged over 256 × 256 sample points in less than 37 s, for a total of 65,536 images, each with 264 × 132 pixels. Due to the information provided by the two-dimensional images, not only the amplitude but also the direction of the magnetic field could be determined. In the third application, millisecond images of a semiconductor nanostructure were recorded to determine the lattice strain in the sample, achieving a speed-up in measurement time by a factor of 200 compared with a previously used camera system.

  16. Observation of Passive and Explosive Emissions at Stromboli with a Ground-based Hyperspectral TIR Camera

    NASA Astrophysics Data System (ADS)

    Smekens, J. F.; Mathieu, G.

    2015-12-01

    Scientific imaging techniques have progressed at a fast pace in recent years, thanks in part to great improvements in detector technology and to our ability to process large amounts of complex data using sophisticated software. Broadband thermal cameras are ubiquitously used for permanent monitoring of volcanic activity and have served a multitude of scientific applications, from tracking ballistics to studying the thermal evolution of lava flow fields and volcanic plumes. In parallel, UV cameras are now used at several volcano observatories to quantify daytime sulfur dioxide (SO2) emissions at very high frequency. In this work we present the results of the first deployment of a ground-based thermal infrared (TIR) hyperspectral imaging system (Telops Hyper-Cam LW) for the study of passive and explosive volcanic activity at Stromboli volcano, Italy. The instrument uses a Michelson spectrometer and Fourier-transform infrared spectrometry to produce hyperspectral datacubes of a scene (320 × 256 pixels) in the range 7.7-11.8 μm, with a spectral resolution of up to 0.25 cm⁻¹ at frequencies of ~10 Hz. The activity at Stromboli is characterized by explosions of small magnitude, often containing significant amounts of gas and ash, separated by periods of quiescent degassing lasting 10-60 minutes. With our dataset, spanning about 5 days of monitoring, we are able to detect and track temporal variations of SO2 and ash emissions during both daytime and nighttime, which ultimately allows the mass of gas and ash ejected during and between explosive events to be quantified. Although the high price and power consumption of the instrument are obstacles to its deployment as a monitoring tool, this type of data set offers unprecedented insight into the dynamic processes taking place at Stromboli and could lead to a better understanding of the eruptive mechanisms of persistently active systems in general.

  17. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging.

    PubMed

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S; Gould, Robert G

    2011-10-10

    We investigated the dependence of image quality on temperature for a position-sensitive avalanche photodiode (PSAPD)-based small-animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred for operating PSAPDs in order to minimize dark-current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and ranged from -40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full width at half maximum (FWHM) of the PSF. In addition, a (99m)Tc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16 × 16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Image quality was compared at -25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of (99m)Tc-pertechnetate and imaged through a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C was degraded in comparison with that at -25 °C. However, the defects and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that gave the highest gain at a given temperature. PMID:24465051
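
    The FWHM of a measured PSF can be extracted from a 1-D profile with a small helper (linear interpolation at the half-maximum crossings; the function name and interpolation scheme are mine):

```python
import numpy as np

def fwhm(profile):
    """Full width at half maximum of a 1-D peaked profile, with linear
    interpolation at the two half-maximum crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.nonzero(y >= half)[0]
    left, right = above[0], above[-1]
    # interpolate the crossing position on each side (guard the edges)
    lf = left - (y[left] - half) / (y[left] - y[left - 1]) if left > 0 else left
    rf = right + (y[right] - half) / (y[right] - y[right + 1]) if right < len(y) - 1 else right
    return rf - lf
```

    For a Gaussian PSF the result approaches the analytic value 2·sqrt(2·ln 2)·σ ≈ 2.355σ, which makes the helper easy to sanity-check on synthetic profiles before applying it to camera data.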

  18. Recreational use assessment of water-based activities, using time-lapse construction cameras.

    PubMed

    Sunger, Neha; Teske, Sondra S; Nappier, Sharon; Haas, Charles N

    2012-01-01

    Recreational exposure to surface waters during periods of increased pathogen concentration may lead to a significantly higher risk of illness. However, estimates of the elementary exposure factors necessary to evaluate health risk (i.e., usage distributions and exposure durations) are not available for many non-swimming water-related activities. No prior studies have assessed non-swimming water exposure with respect to factors leading to impaired water quality through increased pathogen concentration, such as weather conditions (rain events produce increased runoff and sewer overflows) and type of day (heavy recreational periods). We measured usage patterns and evaluated the effect of weather and type of day at eight water sites located within Philadelphia, using a novel time-lapse photography technology during three peak recreational seasons (May-September) of 2008-2010. Camera observations validated with simultaneous in-person surveys exhibited a strong correlation (R² = 0.81 to 0.96) between the two survey techniques, indicating that the application of remote photography to collecting human exposure data was appropriate. Recreational usage varied more on a temporal basis than with inclement weather. Only 14% (6 out of 44) of the site-specific activity combinations showed a dry-weather preference, whereas 41.5% (17 out of 41) of the combinations indicated greater usage on weekends than on weekdays. In general, the lognormal distribution described the playing and wading duration distributions, while the gamma distribution was the best fit for fishing durations. Remote photography provided unbiased, real-time human exposure data and was less personnel-intensive than traditional survey methods. However, there are potential limitations associated with remote surveillance data related to its limited view. This is the first study to report that time-lapse cameras can be successfully applied to assess water-based human recreational patterns and can

  19. An energy-optimized collimator design for a CZT-based SPECT camera

    PubMed Central

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2015-01-01

In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers spanning a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator that, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted on a CZT detector. With pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables were carefully studied: hole shape, hole length, hole radius/width and source-to-collimator distance. Scatter and penetration of the collimator, and sensitivity and spatial resolution of the system, were assessed for four radionuclides (57Co, 99mTc, 123I and 111In) with respect to these four variables. An optimal collimator was then chosen such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked against prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates the promise of the proposed energy-optimized collimator for use in a CZT-based gamma camera, with imaging performance comparable to or better than that of commercial collimators.

  20. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred for operating PSAPDs in order to minimize dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simpler cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and was varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-at-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Image quality was compared at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C degraded in comparison with that at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  1. Camera-based platform and sensor motion tracking for data fusion in a landmine detection system

    NASA Astrophysics Data System (ADS)

    van der Mark, Wannes; van den Heuvel, Johan C.; den Breejen, Eric; Groen, Frans C. A.

    2003-09-01

Vehicles serving as landmine detection robots could be an important tool for demining former conflict areas. On the LOTUS platform for humanitarian demining, different sensors are used to detect a wide range of landmine types. Reliable and accurate detection depends on correctly combining the observations from the different sensors on the moving platform. Currently, a method based on odometry is used to merge the readings from the sensors. In this paper, a vision-based approach is presented which can estimate the relative sensor pose and position together with the vehicle motion. To estimate the relative position and orientation of sensors, techniques from camera calibration are used. The platform motion is estimated from tracked features on the ground. A new approach is presented which can reduce the influence of tracking errors or other outliers on the accuracy of the ego-motion estimate. Overall, the new vision-based approach for sensor localization leads to better estimates than the current odometry-based method.
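The outlier-resistant ego-motion estimate the abstract alludes to can be illustrated with a RANSAC-style consensus over tracked ground features. The two-point minimal solver, iteration count and inlier tolerance below are illustrative assumptions, not the actual LOTUS algorithm:

```python
import math
import random

def rigid_from_pair(p1, p2, q1, q2):
    """Minimal solver: a 2D rotation + translation from two correspondences."""
    a = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    b = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    theta = math.atan2(math.sin(b - a), math.cos(b - a))  # wrap to (-pi, pi]
    c, s = math.cos(theta), math.sin(theta)
    tx = q1[0] - (c * p1[0] - s * p1[1])
    ty = q1[1] - (s * p1[0] + c * p1[1])
    return theta, tx, ty

def apply_motion(model, p):
    theta, tx, ty = model
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_motion(src, dst, iters=200, tol=0.5, seed=0):
    """Estimate frame-to-frame platform motion while ignoring mistracked
    features: keep the model with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    idx = list(range(len(src)))
    for _ in range(iters):
        i, j = rng.sample(idx, 2)
        model = rigid_from_pair(src[i], src[j], dst[i], dst[j])
        inliers = [k for k in idx
                   if math.dist(apply_motion(model, src[k]), dst[k]) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = model, inliers
    return best, best_inliers
```

A final least-squares refit over the returned inlier set would normally follow the consensus step.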

  2. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded when the temperature difference between humans and the background is small. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. Using our method, human areas can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
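As an illustration of the adaptive-threshold idea (not the paper's tuned membership functions or rule base), a minimal Sugeno-style fuzzy system can map statistics of the background/input difference to a binarization threshold:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_threshold(diff_mean, diff_std):
    """Weighted-average (Sugeno-style) defuzzification: each rule proposes
    a threshold relative to the residual noise level. The memberships and
    consequents are illustrative assumptions."""
    low  = triangular(diff_mean, -1.0, 0.0, 30.0)    # low human/bg contrast
    mid  = triangular(diff_mean, 10.0, 40.0, 80.0)
    high = triangular(diff_mean, 50.0, 255.0, 256.0)  # high contrast
    t_low, t_mid, t_high = 2.0 * diff_std, 15.0 + diff_std, 40.0 + diff_std
    w = low + mid + high
    return (low * t_low + mid * t_mid + high * t_high) / w if w else t_mid

def detect(input_img, bg_img):
    """Binarize |input - background| with the fuzzy threshold.
    Images are 2D lists of grey values (0-255)."""
    diffs = [abs(i - b) for ri, rb in zip(input_img, bg_img)
             for i, b in zip(ri, rb)]
    m = sum(diffs) / len(diffs)
    var = sum((d - m) ** 2 for d in diffs) / len(diffs)
    thr = fuzzy_threshold(m, var ** 0.5)
    return [[1 if abs(i - b) > thr else 0 for i, b in zip(ri, rb)]
            for ri, rb in zip(input_img, bg_img)]
```

Because the threshold is recomputed from each background/input pair, the same rule base adapts across scenes with different contrast levels.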

  3. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera.

    PubMed

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded when the temperature difference between humans and the background is small. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. Using our method, human areas can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564

  4. Design of an Event-Driven Random-Access-Windowing CCD-Based Camera

    NASA Technical Reports Server (NTRS)

    Monacos, Steve P.; Lam, Raymond K.; Portillo, Angel A.; Ortiz, Gerardo G.

    2003-01-01

Commercially available cameras are not designed for the combination of single-frame and high-speed streaming digital video with real-time control of the size and location of multiple regions of interest (ROIs). A new control paradigm is defined to eliminate the tight coupling between the camera logic and the host controller. This functionality is achieved by defining the indivisible pixel readout operation on a per-ROI basis with in-camera timekeeping capability. This methodology provides a Random Access, Real-Time, Event-driven (RARE) camera for adaptive camera control and is well suited for target tracking applications requiring autonomous control of multiple ROIs. This methodology additionally provides for reduced ROI readout time and higher frame rates compared to the original architecture by avoiding external control intervention during the ROI readout process.

  5. Uas Based Tree Species Identification Using the Novel FPI Based Hyperspectral Cameras in Visible, NIR and SWIR Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Honkavaara, E.; Tuominen, S.; Saari, H.; Pölönen, I.; Hakala, T.; Viljanen, N.; Soukkamäki, J.; Näkki, I.; Ojanen, H.; Reinikainen, J.

    2016-06-01

Unmanned airborne systems (UAS) based remote sensing offers a flexible tool for environmental monitoring. Novel lightweight Fabry-Perot interferometer (FPI) based, frame-format hyperspectral imaging in the spectral range from 400 to 1600 nm was used for identifying different species of trees in a forest area. To the best of the authors' knowledge, this was the first research in which stereoscopic, hyperspectral VIS, NIR and SWIR data were collected for tree species identification using UAS. The first results of the analysis, based on the fusion of two FPI-based hyperspectral imagers and an RGB camera, showed that the novel FPI hyperspectral technology provided accurate geometric, radiometric and spectral information in a forested scene and is operational for environmental remote sensing applications.

  6. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
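Once the frame-to-frame rotation and translation of the endoscope are known, each tracked surface point is recovered by intersecting its two viewing rays, as the abstract's "triangulation methods" suggest. A minimal midpoint-triangulation sketch (the function and variable names are illustrative; the authors' exact triangulation method is not specified):

```python
def dot(u, v): return sum(a * b for a, b in zip(u, v))
def add(u, v): return [a + b for a, b in zip(u, v)]
def sub(u, v): return [a - b for a, b in zip(u, v)]
def scale(u, k): return [a * k for a in u]

def triangulate_midpoint(c1, d1, c2, d2):
    """Return the 3D point closest to two viewing rays, each given as a
    camera centre c and a back-projected direction d. Solves the 2x2
    normal equations for the closest points on the rays, then averages."""
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # zero only for parallel rays
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = add(c1, scale(d1, s))     # closest point on ray 1
    p2 = add(c2, scale(d2, t))     # closest point on ray 2
    return scale(add(p1, p2), 0.5)
```

For noisy correspondences the two rays rarely intersect, which is why the midpoint (rather than an exact intersection) is used.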

  7. Target detection for low cost uncooled MWIR cameras based on empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Piñeiro-Ave, José; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Artés-Rodríguez, Antonio

    2014-03-01

In this work, a novel method for detecting low-intensity fast-moving objects with low-cost Medium Wavelength Infrared (MWIR) cameras is proposed. The method is based on background subtraction in a video sequence obtained with a low-density Focal Plane Array (FPA) of the newly available uncooled lead selenide (PbSe) detectors. Thermal instability, along with the lack of specific electronics and mechanical devices for canceling the effect of distortion, makes background image identification very difficult. As a result, the identification of targets is performed in low signal-to-noise ratio (SNR) conditions, which may considerably restrict the sensitivity of the detection algorithm. These problems are addressed in this work by means of a new technique based on the empirical mode decomposition, which accomplishes drift estimation and target detection. Given that background estimation is the most important stage for detection, a prior denoising step enabling better drift estimation is designed. Comparisons are conducted against a denoising technique based on the wavelet transform and also with traditional drift estimation methods such as Kalman filtering and running average. The results reported by the simulations show that the proposed scheme has superior performance.
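For reference, the running-average drift estimator that the paper uses as a comparison baseline (not the EMD-based method itself) can be sketched as follows; the update rate alpha and the detection gate k are illustrative values:

```python
def detect_moving_targets(frames, alpha=0.05, k=4.0):
    """Per-pixel running-average background model for an FPA sequence:
    b <- (1 - alpha) * b + alpha * f tracks slow thermal drift, and a pixel
    is flagged when its residual deviates from the frame-mean residual by
    more than k standard deviations. Frames are 2D lists of intensities."""
    h, w = len(frames[0]), len(frames[0][0])
    bg = [row[:] for row in frames[0]]          # initialise from first frame
    detections = []
    for f in frames[1:]:
        residuals = [f[y][x] - bg[y][x] for y in range(h) for x in range(w)]
        m = sum(residuals) / len(residuals)
        sigma = (sum((r - m) ** 2 for r in residuals) / len(residuals)) ** 0.5
        hits = [(y, x) for y in range(h) for x in range(w)
                if abs(f[y][x] - bg[y][x] - m) > k * max(sigma, 1e-9)]
        detections.append(hits)
        # Update the drift estimate after detection
        for y in range(h):
            for x in range(w):
                bg[y][x] = (1 - alpha) * bg[y][x] + alpha * f[y][x]
    return detections
```

Subtracting the frame-mean residual before gating makes the detector insensitive to the uniform component of the thermal drift, which is the weakness the EMD-based estimator improves on for non-uniform drift.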

  8. The Japanese Positron Factory

    NASA Astrophysics Data System (ADS)

    Okada, S.; Sunaga, H.; Kaneko, H.; Takizawa, H.; Kawasuso, A.; Yotsumoto, K.; Tanaka, R.

    1999-06-01

The Positron Factory has been planned at the Japan Atomic Energy Research Institute (JAERI). The factory is expected to produce linac-based monoenergetic positron beams having world-highest intensities of more than 10¹⁰ e⁺/sec, which will be applied for R&D in materials science, biotechnology and basic physics and chemistry. In this article, results of the design studies are demonstrated for the following essential components of the facilities: 1) conceptual design of a high-power electron linac with 100 MeV beam energy and 100 kW averaged beam power; 2) performance tests of the RF window in the high-power klystron and of the electron beam window; 3) development of a self-driven rotating electron-to-positron converter and its performance tests; 4) proposal of a multi-channel beam generation system for monoenergetic positrons, with a series of moderator assemblies based on a newly developed Monte Carlo simulation and the demonstrative experiment; 5) proposal of highly efficient moderator structures; 6) conceptual design of a local shield to suppress the surrounding radiation and activation levels.

  9. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656
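The key property of the pipeline is that the calibrated relative orientations stay fixed, so the exterior orientation of every secondary camera follows from the reference camera's pose by composing homogeneous transforms. A minimal sketch, with hypothetical matrix names:

```python
def matmul(a, b):
    """Product of two 4x4 homogeneous transformation matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def exterior_of_secondary(T_world_a, T_a_b):
    """Given the exterior orientation of the reference camera A (pose of A
    in world coordinates) and the calibrated, invariant relative orientation
    of camera B with respect to A, return B's exterior orientation.
    T_a_b is estimated once via coded targets and then reused for every
    subsequent acquisition with the same rig."""
    return matmul(T_world_a, T_a_b)
```

This is why re-triangulating orientations after each acquisition is unnecessary: only the rig pose changes, never the intra-rig transforms.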

  10. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects included a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656

  11. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but when choppy or windy weather conditions prevent an accurate nadir-waypoint flight, two single aerial images do not always achieve the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. However, because of the limited geometric photobase of the applied stereo camera and the resulting base-to-height ratio, the accuracy of the DTM depends directly on the UAV flight altitude.

  12. Comparison of a new laser beam wound camera and a digital photoplanimetry-based method for wound measurement in horses.

    PubMed

    Van Hecke, L L; De Mil, T A; Haspeslagh, M; Chiers, K; Martens, A M

    2015-03-01

    The aim of this study was to compare the accuracy, precision, inter- and intra-operator reliability of a new laser beam (LB) wound camera and a digital photoplanimetry-based (DPB) method for measuring the dimensions of equine wounds. Forty-one wounds were created on equine cadavers. The area, circumference, maximum depth and volume of each wound were measured four times with both techniques by two operators. A silicone cast was made of each wound and served as the reference standard to measure the wound dimensions. The DPB method had a higher accuracy and precision in determining the wound volume compared with the LB camera, which had a higher accuracy in determining the wound area and maximum depth and better precision in determining the area and circumference. The LB camera also had a significantly higher overall inter-operator reliability for measuring the wound area, circumference and volume. In contrast, the DPB method had poor intra-operator reliability for the wound circumference. The LB camera was more user-friendly than the DPB method. The LB wound camera is recommended as the better objective method to assess the dimensions of wounds in horses, despite its poorer performance for the measurement of wound volume. However, if the wound measurements are performed by one operator on cadavers or animals under general anaesthesia, the DPB method is a less expensive and valid alternative. PMID:25665920

  13. 15. ELEVATED CAMERA STAND, SHOWING LINE OF CAMERA STANDS PARALLEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. ELEVATED CAMERA STAND, SHOWING LINE OF CAMERA STANDS PARALLEL TO SLED TRACK. Looking west southwest down Camera Road. - Edwards Air Force Base, South Base Sled Track, Edwards Air Force Base, North of Avenue B, between 100th & 140th Streets East, Lancaster, Los Angeles County, CA

  14. Extended-field coverage hyperspectral camera based on a single-pixel technique.

    PubMed

    Jin, Senlin; Hui, Wangwei; Liu, Bo; Ying, Cuifeng; Liu, Dongqi; Ye, Qing; Zhou, Wenyuan; Tian, Jianguo

    2016-06-20

    A spectral single-pixel imaging system facilitates effective image compression, but the imaging region is limited by its single detector. This paper presents a hyperspectral camera that allows extended-field coverage to be collected by one detector. Compressive data of a large field of view is achieved by our highly sensitive detection camera, which can be extended to near-infrared or infrared spectral monitoring. We acquire a hyperspectral datacube of 256×256 spatial pixels and 3 nm spectral resolution at a sampling rate of 25%. Finally, we apply our camera to monitoring fruit freshness nondestructively by differentiating a banana's ripeness over time. PMID:27409103

  15. Positron emission mammography imaging

    SciTech Connect

    Moses, William W.

    2003-10-02

    This paper examines current trends in Positron Emission Mammography (PEM) instrumentation and the performance tradeoffs inherent in them. The most common geometry is a pair of parallel planes of detector modules. They subtend a larger solid angle around the breast than conventional PET cameras, and so have both higher efficiency and lower cost. Extensions to this geometry include encircling the breast, measuring the depth of interaction (DOI), and dual-modality imaging (PEM and x-ray mammography, as well as PEM and x-ray guided biopsy). The ultimate utility of PEM may not be decided by instrument performance, but by biological and medical factors, such as the patient to patient variation in radiotracer uptake or the as yet undetermined role of PEM in breast cancer diagnosis and treatment.

  16. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    NASA Astrophysics Data System (ADS)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  17. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

To satisfy the need to test the video processors of satellite remote sensing cameras, we present a simulation and verification system for satellite remote sensing camera video processors based on dual FPGAs. The correctness of the video processor FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs form the central unit, and the A/D data-generation and data-processing logic is developed in VHDL. An RS-232 interface is used to receive commands from the host computer, and different types of data are generated and output depending on the commands. Experimental results show that the simulation and verification system is flexible and works well. It meets the requirements for testing the video processors of several different types of satellite remote sensing cameras.

  18. A depth camera for natural human-computer interaction based on near-infrared imaging and structured light

    NASA Astrophysics Data System (ADS)

    Liu, Yue; Wang, Liqiang; Yuan, Bo; Liu, Hao

    2015-08-01

The design of a novel depth camera is presented, targeting close-range (20-60 cm) natural human-computer interaction, especially for mobile terminals. In order to achieve high precision throughout the working range, a two-step method is employed to map the near-infrared intensity image to absolute depth in real time. First, structured light produced by an 808 nm laser diode and a Dammann grating is used to coarsely quantize the output space of depth values into discrete bins. Then, a learning-based classification forest algorithm predicts the depth distribution over these bins for each pixel in the image. The quantitative experimental results show that this depth camera achieves 1% precision over the 20-60 cm range, indicating that it suits resource-limited and low-cost applications.
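The two-step intensity-to-depth mapping can be illustrated with a toy stand-in for the classification forest: a per-bin mean-intensity model with soft assignment. The bin edges, the intensity model and the softness parameter beta are illustrative assumptions, not the paper's trained forest:

```python
import math

def train_bin_model(intensities, depths, edges):
    """Learn the mean NIR intensity of each coarse depth bin from
    calibration samples. `edges` are the structured-light bin boundaries."""
    sums = [0.0] * (len(edges) - 1)
    counts = [0] * (len(edges) - 1)
    for i, z in zip(intensities, depths):
        for b in range(len(edges) - 1):
            if edges[b] <= z < edges[b + 1]:
                sums[b] += i
                counts[b] += 1
                break
    return [s / c if c else None for s, c in zip(sums, counts)]

def predict_depth(intensity, bin_means, edges, beta=0.2):
    """Soft-assign a pixel to the depth bins (softmax-like weights over the
    distance to each bin's mean intensity) and return the expected bin
    centre, standing in for the forest's predicted distribution over bins."""
    centers = [(edges[b] + edges[b + 1]) / 2 for b in range(len(edges) - 1)]
    weights = [math.exp(-beta * abs(intensity - m)) if m is not None else 0.0
               for m in bin_means]
    total = sum(weights)
    return sum(w * c for w, c in zip(weights, centers)) / total
```

Predicting a distribution over coarse bins rather than a raw depth value is what keeps the precision roughly constant across the working range.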

  19. A large surface X-ray camera based on XPAD3/CdTe single chip hybrids

    NASA Astrophysics Data System (ADS)

    Cassol, F.; Blanc, N.; Bompard, F.; Boudet, N.; Boursier, Y.; Buton, C.; Clémens, J.-C.; Dawiec, A.; Debarbieux, F.; Delpierre, P.; Dupont, M.; Graber-Bolis, J.; Hustache, S.; Morel, C.; Perez-Ponce, H.; Portal, L.; Vigeolas, E.

    2015-11-01

    The XPAD3 chip bump-bonded to a Si sensor has been widely used in preclinical micro-computed tomography and in synchrotron experiments. Although the XPAD3 chip is linear up to 60 keV, the performance of the XPAD3/Si hybrid detector is limited to energies below 30 keV, for which detection efficiencies remain above 20%. To overcome this limitation on detection efficiency in order to access imaging at higher energies, we decided to develop a camera based on XPAD3 single chips bump-bonded to high-Z CdTe sensors. We will first present the construction of this new camera, from the first tests of the single chip hybrids to the actual mechanical assembly. Then, we will show first images and stability tests performed on the D2AM beam line at ESRF synchrotron facility with the fully assembled camera.

  20. Secondary caries detection with a novel fluorescence-based camera system in vitro

    NASA Astrophysics Data System (ADS)

    Brede, Olivier; Wilde, Claudia; Krause, Felix; Frentzen, Matthias; Braun, Andreas

    2010-02-01

The aim of the study was to assess the ability of a fluorescence-based optical system to detect secondary caries. The optical detection system (VistaProof) illuminates the tooth surfaces with blue light emitted by high-power GaN LEDs at 405 nm. Employing this almost monochromatic excitation, the fluorescence is analyzed by an RGB camera chip and encoded in color graduations (blue - red - orange - yellow) by software (DBSWIN), indicating the degree of carious destruction. 31 freshly extracted teeth with existing fillings and secondary caries were cleaned, excavated and refilled with the same kind of restorative material: 19 were refilled with amalgam and 12 with a composite resin. Each step was analyzed with the respective software and evaluated statistically; differences were considered statistically significant at p<0.05. There was no difference between measurements at baseline and after cleaning (Mann-Whitney, p>0.05). There was a significant difference between baseline measurements of the teeth primarily filled with composite resins and the refilled situation (p=0.014), and also between the non-excavated and the excavated groups (composite p=0.006, amalgam p=0.018). This in vitro study showed that the fluorescence-based system allows the detection of secondary caries next to composite resin fillings, but not next to amalgam restorations. Cleaning of the teeth is not necessary if there is no visible plaque. Further studies must show whether the system yields the same promising results in vivo.

  1. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  2. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  3. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  4. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science community for decades, and time series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advances in commercial digital camera technology and increased solid-state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlets and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry, along with digital image processing techniques, can provide the theoretical and practical fundamentals for data processing. Time-lapse images over these periods in west Greenland reveal various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that non-metric digital cameras contain large distortions that must be compensated for precise photogrammetric use. Further, a massive number of images need to be processed in a sufficiently computationally efficient way. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. 
We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous

  5. Polarization encoded color camera.

    PubMed

    Schonbrun, Ethan; Möller, Guðfríður; Di Caprio, Giuseppe

    2014-03-15

    Digital cameras would be colorblind if they did not have pixelated color filters integrated into their image sensors. Integration of conventional fixed filters, however, comes at the expense of an inability to modify the camera's spectral properties. Instead, we demonstrate a micropolarizer-based camera that can reconfigure its spectral response. Color is encoded into a linear polarization state by a chiral dispersive element and then read out in a single exposure. The polarization encoded color camera is capable of capturing three-color images at wavelengths spanning the visible to the near infrared. PMID:24690806

  6. The integration of digital camera derived images with a computer based diabetes register for use in retinal screening.

    PubMed

    Taylor, D J; Jacob, J S; Tooke, J E

    2000-07-01

    Exeter district provides a retinal screening service based on a mobile non-mydriatic camera operated by a dedicated retinal screener visiting general practices on a 2-yearly cycle. Digital attachments to eye cameras can now provide a cost-effective alternative to the use of film in population-based eye screening programmes. Whilst the manufacturers of digital cameras provide a database for the storage of pictures, the images do not as yet interface readily with the rest of the patient's computer-held data or allow for a sophisticated grading, reporting and administration system. The system described is a development of the Exeter diabetes register (EXSYST), which can import digitally derived pictures from either the Ris-Lite TM or Imagenet TM camera systems, or scanned Polaroids. Pictures can be reported by the screener, checked by a consultant ophthalmologist via the hospital network, and a report, consisting of colour pictures, a map of relevant pathology and referral recommendations, produced. This concise report can be hard-copied inexpensively on a high-resolution ink-jet printer to be returned to the patient's general practitioner. Eye images remain available within the hospital diabetes centre computer network to facilitate shared care. This integrated system would form an ideal platform for the addition of computer-based pathology recognition and totally paperless transmission when suitable links to GP surgeries become available. PMID:10837903

  7. A new testing method of SNR for cooled CCD imaging camera based on stationary wavelet transform

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Liu, Qianshun; Yu, Feihong

    2013-08-01

    Cooled CCD (charge-coupled device) imaging cameras have found wide application in the fields of astronomy, color photometry, spectroscopy, medical imaging, densitometry, chemiluminescence and epifluorescence imaging. A cooled CCD (CCCD) imaging camera differs from a traditional CCD/CMOS imaging camera in that it can capture high-resolution images even in low-illumination environments. SNR (signal-to-noise ratio) is the most popular parameter for digital image quality evaluation. Many researchers have proposed various SNR testing methods for traditional CCD imaging cameras, but these are seldom suitable for cooled CCD imaging cameras because the main noise sources differ. In this paper, a new SNR testing method is proposed to evaluate the quality of images captured by a cooled CCD. The Stationary Wavelet Transform (SWT) is introduced into the testing method to obtain a more exact image SNR value; taking full advantage of SWT in image processing makes the experimental results accurate and reliable. To further refine the SNR testing results, the relation between SNR and integration time is also analyzed in this article. The experimental results indicate that the proposed testing method accords with the SNR model of the CCCD. In addition, repeated tests on one system cluster around a single value, showing that the proposed testing method is robust.
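    The core idea — separating noise from signal with a stationary (undecimated) wavelet transform before computing SNR — can be sketched with a one-level undecimated Haar transform in plain numpy. This is an illustrative assumption about the approach, not the authors' implementation, and the function names are invented:

    ```python
    import numpy as np

    def swt_haar_level1(img):
        """One level of an undecimated (stationary) Haar transform.
        All output bands have the same size as the input (uses
        circular shifts instead of downsampling)."""
        right = np.roll(img, -1, axis=1)
        down = np.roll(img, -1, axis=0)
        approx = (img + right + down + np.roll(down, -1, axis=1)) / 4.0
        detail_h = (img - right) / 2.0   # horizontal detail band
        detail_v = (img - down) / 2.0    # vertical detail band
        return approx, detail_h, detail_v

    def estimate_snr(img):
        """SNR in dB: signal power from the smooth approximation band,
        noise power estimated from the detail bands, which are assumed
        to be noise-dominated for a cooled CCD image."""
        approx, dh, dv = swt_haar_level1(np.asarray(img, dtype=float))
        noise_var = (np.var(dh) + np.var(dv)) / 2.0
        signal_power = np.mean(approx ** 2)
        return 10.0 * np.log10(signal_power / noise_var)
    ```

    A production method would use several decomposition levels and a proper wavelet library; the single Haar level here only illustrates why the undecimated transform is convenient (no shift dependence, full-resolution noise bands).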

  8. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 μm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor of 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10⁻³ cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10⁻² cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. 
Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter

  9. Intense source of slow positrons

    NASA Astrophysics Data System (ADS)

    Perez, P.; Rosowsky, A.

    2004-10-01

    We describe a novel design for an intense source of slow positrons based on pair production with a beam of electrons from a 10 MeV accelerator hitting a thin target at a low incidence angle. The positrons are collected with a set of coils adapted to the large production angle. The collection system is designed to inject the positrons into a Greaves-Surko trap (Phys. Rev. A 46 (1992) 5696). Such a source could be the basis for a series of experiments in fundamental and applied research and would also be a prototype source for industrial applications, which concern the field of defect characterization in the nanometer scale.

  10. [Fundamentals of positron emission tomography].

    PubMed

    Ostertag, H

    1989-07-01

    Positron emission tomography is a modern radionuclide method of measuring physiological quantities or metabolic parameters in vivo. The method is based on: (1) radioactive labelling with positron emitters; (2) the coincidence technique for the measurement of the annihilation radiation following positron decay; (3) analysis of the data measured using biological models. The basic aspects and problems of the method are discussed. The main fields of future research are the synthesis of new labelled compounds and the development of mathematical models of the biological processes to be investigated. PMID:2667029

  11. Realization of the FPGA based TDI algorithm in digital domain for CMOS cameras

    NASA Astrophysics Data System (ADS)

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong

    2012-10-01

    In order to make CMOS image sensors suitable for high-resolution space imaging applications, a new method realizing TDI (time delay integration) in the digital domain by FPGA is proposed in this paper, which improves the imaging mode for area-array CMOS sensors. The TDI algorithm accumulates the corresponding pixels of adjoining frames in the digital domain, so the gray values increase by a factor of M, where M is the integration number, and the image quality in terms of signal-to-noise ratio is improved. In addition, optimizations of the TDI algorithm are discussed. First, signal storage is optimized with 2 slices of external RAM, using memory-depth expansion and a ping-pong (double-buffering) operation mechanism. Second, a FIFO operation mechanism reduces memory read and write operations by M×(M-1) times, saving transfer time proportional to the square of the integration number, M², so that the frame frequency can increase greatly. Finally, a CMOS camera based on TDI in the digital domain was developed, and the algorithm was validated by experiments on it.
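    The digital-domain accumulation itself is easy to model in software: shift each successive frame back along the scan direction to undo the image motion, then sum. The sketch below is an illustrative numpy model (not the FPGA implementation); the function name and `shift_per_frame` parameter are assumptions:

    ```python
    import numpy as np

    def digital_tdi(frames, shift_per_frame=1):
        """Accumulate M successive frames in the digital domain.
        Each frame is shifted along the scan direction (rows) to
        re-align the moving scene, then the aligned frames are summed:
        the signal grows M-fold while uncorrelated noise grows only
        by sqrt(M), improving SNR."""
        frames = [np.asarray(f, dtype=np.int64) for f in frames]
        acc = np.zeros_like(frames[0])
        for i, frame in enumerate(frames):
            acc += np.roll(frame, -i * shift_per_frame, axis=0)
        return acc
    ```

    On the FPGA the same accumulation is done incrementally with external RAM buffers rather than by storing all M frames, which is what the ping-pong and FIFO optimizations in the abstract address.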

  12. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image motion vector of a space camera; adjusting this angle reduces its influence on image quality. An ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift adjusting mechanism were designed. The drift adjusting system was designed around the T-60 ultrasonic motor and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder serves as the sensor of the position closed loop, and a voltage driving circuit acts as the ultrasonic wave generator. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. In order to verify the validity of the drift adjusting system, a disturbance source was introduced and simulation analysis performed. The motor drive control system for drift adjustment was designed with improved PID control. The drift angle adjusting system offers small size, simple configuration, high position control precision, fine repeatability, a self-locking property and low power consumption. The results showed that the system can accomplish the drift angle adjusting mission excellently.

  13. A regional density distribution based wide dynamic range algorithm for infrared camera systems

    NASA Astrophysics Data System (ADS)

    Park, Gyuhee; Kim, Yongsung; Joung, Shichang; Shin, Sanghoon

    2014-10-01

    Forward-Looking InfraRed (FLIR) imaging systems have been widely used for both military and civilian purposes. Military applications include target acquisition and tracking and night vision; civilian applications include thermal efficiency analysis, short-range wireless communication, weather forecasting and various other uses. The dynamic range of a FLIR imaging system is larger than that of a commercial display, so auto gain control and contrast enhancement algorithms are generally applied. In IR imaging systems, histogram equalization and plateau equalization are commonly used for contrast enhancement; however, they offer no remedy for excessive enhancement when the luminance histogram is concentrated in a narrow region. In this paper, we propose a regional density distribution based wide dynamic range (WDR) algorithm for infrared camera systems. Depending on the implementation, WDR results can differ considerably. Our approach is a single-frame WDR algorithm that enhances the contrast of both dark and bright detail in real time without losing histogram bins. The significant luminance changes caused by conventional contrast enhancement methods may introduce luminance saturation and failure in object tracking; the proposed method guarantees both effective contrast enhancement and successful object tracking. Moreover, since the proposed method does not use multiple images for WDR, computational complexity is significantly reduced in software/hardware implementations. The experimental results show that the proposed method outperforms conventional contrast enhancement methods.
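    For context, plain plateau equalization — one of the baselines the abstract mentions — can be sketched in a few lines: the histogram is clipped at a plateau value before building the cumulative mapping, which limits the over-stretching that ordinary histogram equalization produces when most pixels fall into a narrow band. This is an illustrative numpy sketch of the baseline, not the proposed regional-density algorithm; names are assumptions:

    ```python
    import numpy as np

    def plateau_equalization(img, plateau):
        """Map a non-negative integer IR image into 8 bits using a
        histogram whose bins are clipped at `plateau`, so dominant
        gray levels cannot monopolize the output range."""
        img = np.asarray(img)
        hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
        hist = np.minimum(hist, plateau)          # clip dominant bins
        cdf = np.cumsum(hist).astype(float)
        cdf /= cdf[-1]                            # normalize to [0, 1]
        return (cdf[img] * 255).astype(np.uint8)  # map into 8-bit range
    ```

    Choosing the plateau value well is the hard part; the paper's contribution is, in effect, a smarter, regionally adaptive alternative to this fixed clip.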

  14. Potential of Uav-Based Laser Scanner and Multispectral Camera Data in Building Inspection

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Weller, C.

    2016-06-01

    Conventional building inspection of bridges, dams or large constructions in general is rather time consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate. The utility of UAVs as an assisting tool in building inspection is obvious. Furthermore, lightweight special sensors such as infrared and thermal cameras as well as laser scanners are available and predestined for use on unmanned aircraft systems. Such a flexible low-cost system is realized in the ADFEX project with the goal of time-efficient object exploration, monitoring and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection. An overview of the ADFEX project, sensor specifications and general requirements of building inspections is given. On the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection are presented and discussed.

  15. Efficient Smart CMOS Camera Based on FPGAs Oriented to Embedded Image Processing

    PubMed Central

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L.; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  16. Image-based correction of the light dilution effect for SO2 camera measurements

    NASA Astrophysics Data System (ADS)

    Campion, Robin; Delgado-Granados, Hugo; Mori, Toshiya

    2015-07-01

    Ultraviolet SO2 cameras are increasingly used in volcanology because of their ability to remotely measure the 2D distribution of SO2 in volcanic plumes at high frequency. However, light dilution, i.e., the scattering of ambient photons into the instrument's field of view (FoV) by air parcels located between the plume and the instrument, induces a systematic underestimation of the measurements, whose magnitude increases with distance, SO2 content, atmospheric pressure and turbidity. Here we describe a robust and straightforward method to quantify and correct this effect. We retrieve atmospheric scattering coefficients based on the contrast attenuation between the sky and the increasingly distant slope of the volcanic edifice. We illustrate our method with a case study at Etna volcano, where the difference between corrected and uncorrected emission rates amounts to 40-80%, and investigate the temporal variations of the scattering coefficient during 1 h of measurements on Etna. We validate the correction method at Popocatépetl volcano by performing measurements of the same plume at different distances from the volcano. Finally, we report atmospheric scattering coefficients for several volcanoes at different latitudes and altitudes.
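    The scattering-coefficient retrieval described above amounts to a log-linear fit: if the contrast between the sky and a dark slope decays as exp(-k·d) with distance d, then a straight-line fit of log-contrast against distance gives -k as the slope. A minimal numpy sketch, with invented variable names and not the authors' code:

    ```python
    import numpy as np

    def scattering_coefficient(distances, slope_radiances, sky_radiance):
        """Retrieve the atmospheric scattering coefficient k from the
        contrast attenuation between the sky and increasingly distant
        dark terrain: contrast C(d) = C0 * exp(-k * d), so fitting
        log(C) against d yields -k as the regression slope."""
        contrast = (sky_radiance - np.asarray(slope_radiances)) / sky_radiance
        slope, _ = np.polyfit(np.asarray(distances, float), np.log(contrast), 1)
        return -slope
    ```

    The subsequent correction of the SO2 column densities uses k together with the plume-to-camera distance; the exact inversion formula is in the paper and is not reproduced here.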

  17. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS.
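    The forward-model-and-invert strategy can be illustrated with a toy readout model: a fixed fraction of each pixel's charge is trapped at every transfer and re-emitted exponentially into the trailing pixels, and the inversion is a fixed-point iteration that finds the column which, pushed through the forward model, reproduces the observation. All parameters and names below are illustrative assumptions; the paper's actual trap model is far more detailed:

    ```python
    import numpy as np

    def add_cte_trail(column, trap_frac=0.02, tau=3.0, length=20):
        """Toy forward model of CTE losses on one CCD column:
        each pixel loses `trap_frac` of its charge, which is
        re-emitted into the following pixels with an exponential
        release profile of scale `tau`."""
        col = np.asarray(column, float)
        out = np.zeros_like(col)
        kernel = np.exp(-np.arange(1, length + 1) / tau)
        kernel = trap_frac * kernel / kernel.sum()   # trail, sums to trap_frac
        for i, q in enumerate(col):
            out[i] += q * (1.0 - trap_frac)          # surviving charge
            n = min(length, len(col) - i - 1)
            out[i + 1:i + 1 + n] += q * kernel[:n]   # trailed charge
        return out

    def remove_cte_trail(observed, n_iter=12, **kw):
        """Invert the forward model by fixed-point iteration:
        x <- x + (observed - forward(x)). Converges quickly because
        the model is a small perturbation of the identity."""
        obs = np.asarray(observed, float)
        x = obs.copy()
        for _ in range(n_iter):
            x = x + (obs - add_cte_trail(x, **kw))
        return x
    ```

    The same structure (measure trails, fit a forward model, invert it) is what makes the correction pixel-based and empirical rather than dependent on a physical trap theory.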

  18. Efficient smart CMOS camera based on FPGAs oriented to embedded image processing.

    PubMed

    Bravo, Ignacio; Baliñas, Javier; Gardel, Alfredo; Lázaro, José L; Espinosa, Felipe; García, Jorge

    2011-01-01

    This article describes an image processing system based on an intelligent ad-hoc camera, whose two principal elements are a high speed 1.2 megapixel Complementary Metal Oxide Semiconductor (CMOS) sensor and a Field Programmable Gate Array (FPGA). The latter is used to control the various sensor parameter configurations and, where desired, to receive and process the images captured by the CMOS sensor. The flexibility and versatility offered by the new FPGA families makes it possible to incorporate microprocessors into these reconfigurable devices, and these are normally used for highly sequential tasks unsuitable for parallelization in hardware. For the present study, we used a Xilinx XC4VFX12 FPGA, which contains an internal Power PC (PPC) microprocessor. In turn, this contains a standalone system which manages the FPGA image processing hardware and endows the system with multiple software options for processing the images captured by the CMOS sensor. The system also incorporates an Ethernet channel for sending processed and unprocessed images from the FPGA to a remote node. Consequently, it is possible to visualize and configure system operation and captured and/or processed images remotely. PMID:22163739

  19. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System.

    PubMed

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers, is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. Carding process is the one devoted to transform a "fuzzy mass" of tufted fibers into a regular mass of untwisted fibers, named "tow". During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the one used for reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real-time. PMID:27589765

  20. Hyperspectral characterization of fluorophore diffusion in human skin using a sCMOS based hyperspectral camera

    NASA Astrophysics Data System (ADS)

    Hernandez-Palacios, J.; Haug, I. J.; Grimstad, Ø.; Randeberg, L. L.

    2011-07-01

    Hyperspectral fluorescence imaging is a modality combining high spatial and spectral resolution with increased sensitivity for low photon counts. The main objective of the current study was to investigate whether this technique is a suitable tool for characterizing diffusion properties in human skin. This was done by imaging fluorescence from Alexa 488 in ex vivo human skin samples using an sCMOS-based hyperspectral camera. Pre-treatment with acetone, DMSO and mechanical micro-needling of the stratum corneum created variation in epidermal permeability between the measured samples. Selected samples were also stained using fluorescence-labelled biopolymers. The effect of fluorescence enhancers on transdermal diffusion could be documented from the collected data. Acetone was found to enhance transport, and the results indicate that the biopolymers might have a similar effect. The enhancement from these compounds was not as prominent as the effect of mechanically penetrating the sample with a micro-needling device. Hyperspectral fluorescence imaging has thus proven to be an interesting tool for characterizing fluorophore diffusion in ex vivo skin samples. Further work will include repeating the measurements at a shorter time scale and mathematical modeling of the diffusion process to determine the diffusivity in skin of the compounds in question.

  1. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
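    The classification stage described above — texture features extracted from sky images and fed to a distance-weighted KNN classifier — might be sketched as follows. This is a toy numpy example with a handful of stand-in statistical features instead of the paper's 45; all names are assumptions:

    ```python
    import numpy as np

    def texture_features(patch):
        """A few simple statistical texture features of an image patch
        (stand-ins for the 45 statistical and spatial-frequency
        features used in the paper)."""
        p = np.asarray(patch, dtype=float)
        gx = np.diff(p, axis=1)   # horizontal gradient
        gy = np.diff(p, axis=0)   # vertical gradient
        return np.array([p.mean(), p.std(),
                         np.abs(gx).mean(), np.abs(gy).mean()])

    def weighted_knn(train_x, train_y, query, k=3):
        """Distance-weighted k-nearest-neighbour vote: closer
        neighbours contribute more to the class decision."""
        d = np.linalg.norm(train_x - query, axis=1)
        idx = np.argsort(d)[:k]
        weights = 1.0 / (d[idx] + 1e-9)
        votes = {}
        for label, w in zip(train_y[idx], weights):
            votes[label] = votes.get(label, 0.0) + w
        return max(votes, key=votes.get)
    ```

    In practice the feature vectors would be normalized and the weighting scheme tuned; the sketch only shows the overall shape of the pipeline.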

  2. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with analysis of the influence of the camera lens diaphragm to simulate photon transport process in free space. In addition, the radiance theorem is also adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model is validated with a Monte-Carlo-based free-space photon transport model and physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improvement performance and potential of the proposed model for simulating photon transport process in free space. PMID:20935713

  3. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited by factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images; it is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, one background image is generated by median and average filtering, and additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images; the thresholds for the difference images are adaptively determined based on the brightness of the generated background image, and noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the horizontal and vertical histograms of the detected area; this procedure is also operated adaptively based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of candidate human regions is performed based on the size and height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective
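    The first two stages (background generation over a frame sequence, then adaptive difference thresholding) can be sketched in numpy. This is a simplified illustration with invented names: the paper's extra filtering steps (maximum gray level, region erasing, edge differences, morphology) are omitted, and the specific adaptive-threshold formula here is an assumption:

    ```python
    import numpy as np

    def generate_background(frames):
        """Per-pixel median over a frame sequence: transient warm
        objects (people) are suppressed while the static scene
        survives. (The paper additionally combines this with
        average filtering and removes residual human areas.)"""
        stack = np.stack([np.asarray(f, float) for f in frames])
        return np.median(stack, axis=0)

    def candidate_mask(frame, background, base=10.0):
        """Candidate human pixels: absolute difference against the
        background, with a threshold that adapts to the background
        brightness, mimicking the idea that hotter daytime
        backgrounds need a higher threshold."""
        diff = np.abs(np.asarray(frame, float) - background)
        threshold = base * (1.0 + background.mean() / 255.0)
        return diff > threshold
    ```

    The resulting boolean mask would then be cleaned by component labeling and size filtering before the merge/separate logic of the later stages.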

  4. Positron microanalysis with high intensity beams

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.

    1990-01-01

    One of the more common applications for a high intensity slow positron facility will be microanalysis of solid materials. In the first section of this paper some examples are given of procedures that can be developed. Since most of the attendees of this workshop are experts in positron spectroscopy, comprehensive descriptions will be omitted. With the exception of positron emission microscopy, most of the procedures will be based on those already in common use with broad beams. The utility of the methods has been demonstrated in every case, but materials scientists use very few of them because positron microbeams are not generally available. A high intensity positron facility will make microbeams easier to obtain and partially alleviate this situation. All of the microanalysis techniques listed below share a common requirement: the ability to locate the microscopic detail or area of interest and to focus the positron beam exclusively on it. The last section of this paper suggests how a high intensity positron facility might be designed so as to have this capability built in. The method will involve locating the specimen by scanning it with the microbeam of positrons and inducing a secondary electron image that will immediately reveal whether or not the positron beam is striking the proper portion of the specimen. This 'scanning positron microscope' will be a somewhat prosaic analog of the conventional SEM. It will, however, be an indispensable utility that will enhance the practicality of positron microanalysis techniques. 6 refs., 1 fig.

  5. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality, which would be especially useful in assisting inexperienced individuals to collect meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system: multi-channel sensation, just noticeable blur, and the contrast sensitivity function, used to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452. This indicates the value of applying the algorithm, which is based on the human visual system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications. PMID:26672033
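
    Since the two-class decisions are evaluated by ROC analysis, an AUC figure like the one quoted above can be computed directly from classifier scores via the rank-sum identity; a small sketch, independent of the paper's SVM and decision-tree classifiers:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the fraction of (positive, negative) pairs ranked correctly, with
    ties counted as half. Illustrative only."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

print(auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # → 1.0
```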

  6. An Empirical Pixel-Based Correction for Imperfect CTE. I. HST's Advanced Camera for Surveys

    NASA Astrophysics Data System (ADS)

    Anderson, Jay; Bedin, Luigi R.

    2010-09-01

    We use an empirical approach to characterize the effect of charge-transfer efficiency (CTE) losses in images taken with the Wide-Field Channel of the Advanced Camera for Surveys (ACS). The study is based on profiles of warm pixels in 168 dark exposures taken between 2009 September and October. The dark exposures allow us to explore charge traps that affect electrons when the background is extremely low. We develop a model for the readout process that reproduces the observed trails out to 70 pixels. We then invert the model to convert the observed pixel values in an image into an estimate of the original pixel values. We find that when we apply this image-restoration process to science images with a variety of stars on a variety of background levels, it restores flux, position, and shape. This means that the observed trails contain essentially all of the flux lost to inefficient CTE. The Space Telescope Science Institute is currently evaluating this algorithm with the aim of optimizing it and eventually providing enhanced data products. The empirical procedure presented here should also work for other epochs (e.g., pre-SM4), though the parameters may have to be recomputed for the time when ACS was operated at a higher temperature than the current -81°C. Finally, this empirical approach may also hold promise for other instruments, such as WFPC2, STIS, the ACS's HRC, and even WFC3/UVIS. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

  7. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    PubMed Central

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350

  8. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-01-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often relies on faces and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem, arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system. PMID:25494350
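
    The tailored 0-1 knapsack mentioned above is not spelled out in the abstract, but the generic 0-1 knapsack it builds on is a standard dynamic program; in this context one can think of classifier accuracies as values and redundancy scores as integer weights (the numbers below are made-up stand-ins):

```python
def knapsack_01(values, weights, capacity):
    """Standard 0-1 knapsack dynamic program over a 1-D table, iterating
    capacities downward so each item is used at most once. This is the
    generic algorithm, not the paper's tailored variant."""
    best = [0] * (capacity + 1)
    for value, weight in zip(values, weights):
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

# Toy example: pick classifiers maximizing total "accuracy" under a
# redundancy budget of 50.
print(knapsack_01([60, 100, 120], [10, 20, 30], 50))  # → 220
```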

  9. Enhancing spatial resolution of 18F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine

    NASA Astrophysics Data System (ADS)

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I.; Shi, Kuangyu

    2015-07-01

    Position-sensitive positron cameras using silicon pixel detectors have been applied in some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector: an incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory in hardware, or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using a support vector machine. A classification model is constructed by learning the features of positron trajectories from Monte Carlo simulations using Geant4. Topological and energy features of pixels fired by 18F positrons were considered for training and classification. After applying the classification model to measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [18F]FDG imaging of an absorbing-edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy-weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing-edge measurements. For the positron imaging of a leaf sample, the proposed method achieved a lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. These preliminary improvements support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
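
    The energy-weighted centroid approximation that serves as the baseline above is straightforward to state; a minimal sketch (the pixel coordinates and deposited energies are made-up):

```python
import numpy as np

def energy_weighted_centroid(coords, energies):
    """Energy-weighted centroid over the fired pixels of one positron
    event: the baseline position estimator the paper compares against."""
    coords = np.asarray(coords, float)
    w = np.asarray(energies, float)
    return (coords * w[:, None]).sum(axis=0) / w.sum()

# Three fired pixels (row, col) with made-up deposited energies.
centroid = energy_weighted_centroid([(0, 0), (0, 1), (1, 1)],
                                    [10.0, 20.0, 10.0])
print(centroid)  # → [0.25 0.75]
```

    The paper's contribution is to replace this averaging over all fired pixels with an SVM that singles out the primary fired pixel.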

  10. Fast time-lens-based line-scan single-pixel camera with multi-wavelength source

    PubMed Central

    Guo, Qiang; Chen, Hongwei; Weng, Zhiliang; Chen, Minghua; Yang, Sigang; Xie, Shizhong

    2015-01-01

    A fast time-lens-based line-scan single-pixel camera with multi-wavelength source is proposed and experimentally demonstrated in this paper. A multi-wavelength laser instead of a mode-locked laser is used as the optical source. With a diffraction grating and dispersion compensating fibers, the spatial information of an object is converted into temporal waveforms which are then randomly encoded, temporally compressed and captured by a single-pixel photodetector. Two algorithms (the dictionary learning algorithm and the discrete cosine transform-based algorithm) for image reconstruction are employed, respectively. Results show that the dictionary learning algorithm has greater capability to reduce the number of compressive measurements than the DCT-based algorithm. The effective imaging frame rate increases from 200 kHz to 1 MHz, which shows a significant improvement in imaging speed over conventional single-pixel cameras. PMID:26417527

  11. MICROCARD: a micro-camera based on a circular diffraction grating for MWIR and LWIR imagery

    NASA Astrophysics Data System (ADS)

    Druart, Guillaume; Guérineau, Nicolas; Tauvy, Michel; Rommeluère, Sylvain; Primot, Jérôme; Deschamps, Joël; Fendler, Manuel; Cigna, Jean-Charles; Taboury, Jean

    2008-09-01

    Circular diffraction gratings (also called diffractive axicons) are optical components producing achromatic non-diffracting beams. They thus produce a focal line rather than the focal point of a classical lens. We have recently shown in the visible spectral range that this property can be used to design a simple imaging system with a long depth of focus and a linear variable zoom by using and translating a diffractive axicon as the only component. We have then adapted this principle to the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) spectral ranges. A LWIR low-cost micro-camera, called MICROCARD, has been designed and realized, and first images from this camera will be shown. Moreover, a way to design a compact MWIR micro-camera with moveable parts integrated directly into the cryostat will be presented.

  12. Performances of a solid streak camera based on conventional CCD with nanosecond time resolution

    NASA Astrophysics Data System (ADS)

    Wang, Bo; Bai, Yonglin; Zhu, Bingli; Gou, Yongsheng; Xu, Peng; Bai, XiaoHong; Liu, Baiyu; Qin, Junjun

    2015-02-01

    Imaging systems with high temporal resolution are needed to study rapid physical phenomena ranging from shock waves, including extracorporeal shock waves used for surgery, to diagnostics of laser fusion and fuel injection in internal combustion engines. However, conventional streak cameras use a vacuum tube, making them fragile, cumbersome and expensive. Here we report a CMOS streak camera project that reproduces this streak camera functionality entirely with a single CMOS chip. By changing the charge-transfer mode of the CMOS image sensor, fast photoelectric diagnostics of a single point with a linear CMOS sensor and high-speed line scanning with an array CMOS sensor can be achieved, respectively. A fast photoelectric diagnostics system has been designed and fabricated to investigate the feasibility of this method, and the dynamic operation of the sensors is presented. Measurements show a sample time of 500 ps and a time resolution better than 2 ns.

  13. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations.

    PubMed

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information of a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors without common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315

  14. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique for acquiring 3D and colour information of a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on computing the extrinsic parameters of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for registering sensors without common features that avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distance from the control points to the ToF camera is known, the working distance of each element of the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315
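
    Applying a depth-indexed homography lookup table could look roughly like this sketch, which simply picks the Hlut entry whose working distance is nearest to the measured ToF depth; the entries, depths and function name are invented for illustration, not taken from the paper:

```python
import numpy as np

def warp_with_hlut(pt, depth, hlut):
    """Select the homography from a depth-indexed lookup table (Hlut)
    whose working distance is closest to the measured depth, then apply
    it to the pixel in homogeneous coordinates."""
    _, H = min(hlut, key=lambda entry: abs(entry[0] - depth))
    x, y = pt
    v = H @ np.array([x, y, 1.0])
    return v[:2] / v[2]

# Toy Hlut with two entries: identity at 1 m, a pure 5-px shift at 2 m.
hlut = [(1.0, np.eye(3)),
        (2.0, np.array([[1.0, 0.0, 5.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0]]))]
print(warp_with_hlut((10.0, 4.0), 1.9, hlut))  # nearest entry: 2 m
```

    A real implementation would interpolate between neighbouring Hlut entries rather than snapping to the nearest one.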

  15. Methodology for lens transmission measurement in the 8-13 micron waveband: integrating sphere versus camera-based

    NASA Astrophysics Data System (ADS)

    Schuster, Norbert; Verplancke, Jan; Salethaiyan, Bergeron; Franks, John

    2014-05-01

    Transmission is a key parameter in describing an IR lens, but it is also often the subject of controversy. One reason is the misinterpretation of "transmission" in infrared camera practice. If the camera lens is replaced by an alternative one, the signal is affected by two parameters: it scales inversely with the square of the effective, aperture-based F-number and linearly with the transmission. This measure of energy collection is defined as the energy throughput (ETP), and the signal level of the IR camera is proportional to the ETP. Most published lens transmission values are based on spectrophotometric measurement of plane-parallel witness pieces obtained from coating processes, while published aperture-based F-numbers very often derive from ray-tracing values in the on-axis bundle. The following contribution is about transmission measurement, and highlights the bulk absorption and coating issues of infrared lenses. Two different setups are built and tested: an integrating sphere (IS) based setup and a camera-based (CB) setup. The comparison of the two principles also clarifies the impact of the F-number. One difficulty in accurately estimating lens transmission lies in measuring the ratio between the signal of ray bundles deviated by the lens under test and the signal of non-deviated ray bundles without the lens (100% transmission). There are many sources of error and deviation in the LWIR region, including background radiation, reflection from "rough" surfaces, and unexpected transmission bands. Care is taken in the setup that measured signals with and without the lens are consistent and reproducible. Reference elements such as uncoated lenses are used for calibration of both setups. When solid-angle-based radiometric relationships are included, both setups yield consistent transmission values. The setups and their calibration will be described, and test results on commercially available lenses will be published.
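
    The ETP relationship above (signal proportional to transmission divided by the square of the F-number) makes lens swaps easy to compare; a small sketch with made-up lens values:

```python
def relative_signal(tau, f_number):
    """Energy throughput proxy from the text: camera signal scales
    linearly with lens transmission tau and inversely with the square
    of the effective F-number (ETP ∝ tau / F²)."""
    return tau / f_number ** 2

# Swapping an F/1.0, tau = 0.90 lens for an F/1.2, tau = 0.95 lens:
gain = relative_signal(0.95, 1.2) / relative_signal(0.90, 1.0)
print(round(gain, 3))  # < 1: the faster F/1.0 lens wins despite lower tau
```

    This is why transmission alone can mislead when comparing lenses: the F-number enters quadratically.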

  16. Computer-vision-based weed identification of images acquired by 3CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Yun; He, Yong; Fang, Hui

    2006-09-01

    Selective application of herbicide to weeds at an early stage of crop growth is an important aspect of site-specific management of field crops. To develop more adaptive on-line weed detection, many researchers are studying image processing techniques for the intensive computation and feature extraction tasks needed to distinguish weeds from crops and the soil background. This paper investigates the potential of applying digital images acquired by the MegaPlus TM MS3100 3-CCD camera to segment the background soil from the plants in question and further recognize weeds among the crops using the Matlab script language. The image of the near-infrared waveband (center 800 nm; width 65 nm) was selected principally for segmenting soil, and cotton was identified among the thistles based on their respective relative areas (pixel counts) in the whole image. The results show adequate recognition: the pixel proportions of soil, cotton leaves and thistle leaves were 78.24% (-0.20% deviation), 16.66% (+2.71% SD) and 4.68% (-4.19% SD), respectively. However, problems still exist in separating and locating single plants because of their clustering in the images. The information in the images acquired via the other two channels, i.e., the green and red bands, needs to be extracted to aid crop/weed discrimination. More optical specimens should be acquired for calibration and validation to establish a weed-detection model that can be effectively applied in the field.
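
    The area-proportion accounting above can be sketched with simple intensity thresholds on the NIR band; note the two-threshold scheme and all threshold values here are illustrative assumptions, not the paper's actual segmentation procedure:

```python
import numpy as np

def class_proportions(nir, soil_thresh, crop_thresh):
    """Toy pixel-proportion computation: segment soil by a NIR intensity
    threshold, then split the remaining plant pixels into crop and weed
    by a second (assumed) threshold."""
    total = nir.size
    soil = nir < soil_thresh
    crop = (~soil) & (nir >= crop_thresh)
    weed = (~soil) & (nir < crop_thresh)
    return soil.sum() / total, crop.sum() / total, weed.sum() / total

# Synthetic 10x10 NIR image: 78 soil, 17 crop and 5 weed pixels.
nir = np.concatenate([np.full(78, 20), np.full(17, 200), np.full(5, 120)])
nir = nir.reshape(10, 10)
props = class_proportions(nir, soil_thresh=50, crop_thresh=150)
print(props)  # → (0.78, 0.17, 0.05)
```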

  17. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.

  18. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  19. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined from variations in the geometry of a projected pattern. The other camera system is based on a time-of-flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array configuration, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
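
    For the ToF approach, range follows from the measured phase shift of the modulated light via the standard continuous-wave relation d = c·φ / (4π·f_mod); a small sketch (the 20 MHz modulation frequency is a typical value, not this camera's, and the in-water speed of light would be lower than the vacuum value used here):

```python
import math

def tof_depth(phase_rad, mod_freq_hz, c=299792458.0):
    """Continuous-wave ToF range from the phase shift between emitted
    and received modulated light: d = c * phi / (4 * pi * f_mod).
    The factor 4*pi accounts for the round trip."""
    return c * phase_rad / (4 * math.pi * mod_freq_hz)

# A pi/2 phase shift at 20 MHz modulation:
d = tof_depth(math.pi / 2, 20e6)
print(round(d, 3))  # metres
```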

  20. Dry imaging cameras

    PubMed Central

    Indrajit, IK; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-01-01

    Dry imaging cameras are important hard copy devices in radiology. Using a dry imaging camera, multiformat images from digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes drawn from diverse sciences such as computing, mechanics, thermodynamics, optics, electricity and radiography. Broadly, hard copy devices are classified as laser-based or non-laser-based technology. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of a dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow. PMID:21799589

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation must be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display from both cameras.
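
    A retina-like sensor places its pixels on concentric rings, so displaying its data alongside a rectangular sensor requires a polar-to-Cartesian coordinate transformation before interpolation. A minimal sketch of such a mapping; the geometric growth factor, base radius and sector count are illustrative assumptions, not the actual sensor's geometry:

```python
import math

def retina_to_cartesian(ring, sector, r0=1.0, growth=1.1, sectors=64):
    """Map a retina-like (log-polar style) pixel index to Cartesian
    coordinates: ring radii grow geometrically outward and each ring is
    divided into a fixed number of uniform angular sectors."""
    r = r0 * growth ** ring
    theta = 2 * math.pi * sector / sectors
    return r * math.cos(theta), r * math.sin(theta)

# Sector 16 of 64 is a quarter turn, so the point lies on the +y axis.
x, y = retina_to_cartesian(ring=10, sector=16)
print(round(x, 6), round(y, 6))
```

    The sub-pixel interpolation step mentioned in the abstract would then resample the rectangular image at these non-integer (x, y) positions.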

  2. Video-based realtime IMU-camera calibration for robot navigation

    NASA Astrophysics Data System (ADS)

    Petersen, Arne; Koch, Reinhard

    2012-06-01

    This paper introduces a new method for the fast calibration of inertial measurement units (IMUs) rigidly coupled to cameras. That is, the relative rotation and translation between the IMU and the camera are estimated, allowing the transfer of IMU data into the camera's coordinate frame. Moreover, the IMU's nuisance parameters (biases and scales) and the horizontal alignment of the initial camera frame are determined. Since an iterated Kalman filter is used for estimation, information on the estimation's precision is also available. Such calibrations are crucial for IMU-aided visual robot navigation, i.e. SLAM, since wrong calibrations cause biases and drifts in the estimated position and orientation. As the estimation is performed in real time, the calibration can be done using a freehand movement and the estimated parameters can be validated just in time. This provides the opportunity to optimize the trajectory online, increasing the quality and minimizing the time effort of calibration. Except for a marker pattern used for visual tracking, no additional hardware is required. As will be shown, the system is capable of estimating the calibration within a short period of time; depending on the requested precision, trajectories of 30 seconds to a few minutes are sufficient. This allows the system to be calibrated at startup, so that deviations in the calibration due to transport and storage can be compensated. The estimation quality and consistency are evaluated in dependence on the traveled trajectories and the amount of IMU-camera displacement and rotation misalignment. It is analyzed how different types of visual markers, i.e. 2- and 3-dimensional patterns, affect the estimation. Moreover, the method is applied to mono and stereo vision systems, providing information on its applicability to robot systems. The algorithm is implemented using a modular software framework, such that it can be adapted to altered conditions easily.

  3. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital single-lens reflex cameras (DSLRs) have appeared. As a consequence we can buy cameras with higher and higher pixel counts, and mass production has greatly reduced prices. The CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allow images to be taken with much less noise. The software background is developing in a similar way—intelligent programs are created for post-processing and other supplementary work. Nowadays we can find a digital camera in almost every household, and most of these cameras are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rising standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.
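
    Given the companion's pixel offsets from the primary and the plate scale, the two double-star parameters can be computed directly; a minimal sketch assuming a north-up, east-left image orientation (the camera-angle calibration the chapter's method would require is omitted, and the numbers are made-up):

```python
import math

def pa_and_sep(dx_east, dy_north, plate_scale_arcsec_per_px):
    """Position angle (degrees, measured from north through east) and
    separation (arcsec) of a companion from its pixel offsets relative
    to the primary star."""
    sep = math.hypot(dx_east, dy_north) * plate_scale_arcsec_per_px
    pa = math.degrees(math.atan2(dx_east, dy_north)) % 360.0
    return pa, sep

# Companion 30 px east and 40 px north at an assumed 0.5 arcsec/px scale:
pa, sep = pa_and_sep(30.0, 40.0, 0.5)
print(round(pa, 1), round(sep, 2))  # → 36.9 25.0
```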

  4. Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung

    SciTech Connect

    Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P.; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J.; Steinfort, Daniel; Ball, David L.; Hofman, Michael S.

    2015-10-01

    Purpose: To investigate 68Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r² = 0.99, P<.01), with ventilation strongly negatively linear (r² = 0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with a greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET.
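
    The linear dose-response models above amount to least-squares slopes of relative functional loss against dose; a toy sketch (the data are synthetic, constructed to reproduce a 0.75 %/Gy slope like the perfusion fit, not the trial's measurements):

```python
import numpy as np

def loss_per_gy(doses, relative_loss):
    """Least-squares slope (percent loss per Gy) of a simple linear
    dose-response model fitted to binned dose/loss data."""
    slope, _intercept = np.polyfit(doses, relative_loss, 1)
    return slope

# Dose-bin midpoints (Gy) and a perfectly linear synthetic response.
doses = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
loss = 0.75 * doses
print(round(loss_per_gy(doses, loss), 2))  # → 0.75
```

    Model comparison via AIC, as in the study, would then penalize the extra parameter of the logistic alternative.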

  5. Comparison of Target- and Mutual Information Based Calibration of Terrestrial Laser Scanner and Digital Camera for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Omidalizarandi, M.; Neumann, I.

    2015-12-01

    In the current state of the art, geodetic deformation analysis of natural and artificial objects (e.g. dams, bridges,...) is an active research topic in both static and kinematic modes and has received considerable interest from researchers and geodetic engineers. In this work, in order to increase the accuracy of geodetic deformation analysis, a terrestrial laser scanner (TLS; here the Zoller+Fröhlich IMAGER 5006) and a high-resolution digital camera (Nikon D750) are integrated so that the two sensors complement each other. To optimally combine the acquired data of the hybrid sensor system, a highly accurate estimate of the extrinsic calibration parameters between the TLS and the digital camera is a vital preliminary step. The calibration of the aforementioned hybrid sensor system can thus be separated into three single calibrations: calibration of the camera, calibration of the TLS, and extrinsic calibration between the TLS and the digital camera. In this research, we focus on highly accurate estimation of the extrinsic parameters between the fused sensors, applying both target-based and targetless (mutual information based) methods. In the target-based calibration, different types of observations (image coordinates, TLS measurements, and laser tracker measurements for validation) are utilized, and variance component estimation is applied to assign adequate weights to the observations. Space resection bundle adjustment based on the collinearity equations is solved using the Gauss-Markov and Gauss-Helmert models. Statistical tests are performed to discard outliers and large residuals in the adjustment procedure. Finally, the two aforementioned approaches are compared, their advantages and disadvantages are investigated, and numerical results are presented and discussed.

  6. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  7. Modelling Positron Interactions with Matter

    NASA Astrophysics Data System (ADS)

    Garcia, G.; Petrovic, Z.; White, R.; Buckman, S.

    2011-05-01

    In this work we link fundamental measurements of positron interactions with biomolecules, with the development of computer codes for positron transport and track structure calculations. We model positron transport in a medium from a knowledge of the fundamental scattering cross section for the atoms and molecules comprising the medium, combined with a transport analysis based on statistical mechanics and Monte-Carlo techniques. The accurate knowledge of the scattering is most important at low energies, a few tens of electron volts or less. The ultimate goal of this work is to do this in soft condensed matter, with a view to ultimately developing a dosimetry model for Positron Emission Tomography (PET). The high-energy positrons first emitted by a radionuclide in PET may well be described by standard formulas for energy loss of charged particles in matter, but it is incorrect to extrapolate these formulas to low energies. Likewise, using electron cross-sections to model positron transport at these low energies has been shown to be in serious error due to the effects of positronium formation. Work was supported by the Australian Research Council, the Serbian Government, and the Ministerio de Ciencia e Innovación, Spain.
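
    A minimal Monte Carlo sketch of the low-energy transport idea described above, including a positronium-formation channel competing with elastic scattering. All cross sections, densities and the energy-loss scale below are invented placeholders, not measured data:

```python
import math
import random

N_DENSITY = 3.3e22      # scatterer number density (cm^-3), assumed
SIGMA_ELASTIC = 1e-16   # elastic cross section (cm^2), placeholder
SIGMA_PS = 0.5e-16      # positronium-formation cross section (cm^2), placeholder
PS_THRESHOLD = 6.8      # Ps formation threshold (eV), assumed
MEAN_LOSS = 2.0         # mean energy lost per collision (eV), placeholder

def track(energy_ev, rng):
    """Follow one positron until thermalization or positronium formation.
    Returns (path_length_cm, fate)."""
    s = 0.0
    while energy_ev > 0.1:
        sigma_tot = SIGMA_ELASTIC + (SIGMA_PS if energy_ev > PS_THRESHOLD else 0.0)
        mfp = 1.0 / (N_DENSITY * sigma_tot)       # mean free path (cm)
        s += -mfp * math.log(1.0 - rng.random())  # exponential free flight
        if energy_ev > PS_THRESHOLD and rng.random() < SIGMA_PS / sigma_tot:
            return s, "positronium"
        energy_ev -= MEAN_LOSS * rng.random()     # stochastic energy loss
    return s, "thermalized"

fates = [track(30.0, random.Random(i))[1] for i in range(1000)]
ps_frac = fates.count("positronium") / 1000.0
print(ps_frac)
```

    Even this toy model shows why electron cross sections cannot substitute for positron ones: the positronium channel removes most low-energy positrons before they thermalize.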

  8. Space Camera

    NASA Technical Reports Server (NTRS)

    1983-01-01

    Nikon's F3 35mm camera was specially modified for use by Space Shuttle astronauts. The modification work produced a spinoff lubricant. Because lubricants in space have a tendency to migrate within the camera, Nikon conducted extensive development to produce nonmigratory lubricants; variations of these lubricants are used in the commercial F3, giving it better performance than conventional lubricants. Another spinoff is the coreless motor which allows the F3 to shoot 140 rolls of film on one set of batteries.

  9. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators.

    PubMed

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom's surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ε  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm(2), averaged group variances were  ±3.0 cm(2). The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application
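
    The characteristic values mentioned above (temperature-area histograms and effective field size) can be computed from a temperature-rise map roughly as follows. The map is synthetic, and taking the EFS as the area above 50% of the maximum rise is an assumed convention:

```python
import numpy as np

# Synthetic temperature-rise map (°C) on a 0.5 cm grid -- illustrative only.
px = 0.5  # cm per pixel, assumed
x = np.linspace(-8, 8, 33)
xx, yy = np.meshgrid(x, x)
rise = 12.0 * np.exp(-(xx**2 + yy**2) / (2 * 4.0**2))  # Gaussian hot spot

def temperature_area_histogram(rise_map, thresholds, pixel_area):
    """Heated area (cm^2) above each temperature-rise threshold (a TAH)."""
    return [(rise_map >= t).sum() * pixel_area for t in thresholds]

def effective_field_size(rise_map, pixel_area, level=0.5):
    """EFS taken here as the area above `level` x maximum rise (assumed)."""
    return (rise_map >= level * rise_map.max()).sum() * pixel_area

tah = temperature_area_histogram(rise, [2, 4, 6, 8, 10], px * px)
efs = effective_field_size(rise, px * px)
```

    The TAH is by construction non-increasing in the threshold, which is what makes it a compact summary of how evenly an applicator heats its field.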

  10. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators

    NASA Astrophysics Data System (ADS)

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom’s surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ɛ  =  0.91  ±  0.03, the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6  ±  6.7) cm2, averaged group variances were  ±3.0 cm2. The TAHs show that the influence of the water bolus is mostly limited to depths of  <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T Bolus  =  30 °C and 8.6 °C for T Bolus  =  21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application

  11. Design and development of a position-sensitive γ-camera for SPECT imaging based on PCI electronics

    NASA Astrophysics Data System (ADS)

    Spanoudaki, V.; Giokaris, N. D.; Karabarbounis, A.; Loudos, G. K.; Maintas, D.; Papanicolas, C. N.; Paschalis, P.; Stiliaris, E.

    2004-07-01

    A position-sensitive γ-camera is currently being designed at IASA. This camera will be used experimentally (in development mode) in order to obtain integrated knowledge of its function and possibly to improve its performance, in parallel with an existing camera that has shown very good performance in phantom and small-animal SPECT studies and is currently being tested for clinical applications. The new system combines a PSPMT (Hamamatsu, R2486-05) and a PMT for simultaneous or independent acquisition of energy and position information, respectively. The readout of the PSPMT's anode signals is performed with the resistive chain technique, resulting in two signals for each (X, Y) direction; the system is based on PCI electronics. The status of the system's development and the ongoing progress are presented.

  12. Photometric-based recovery of illuminant-free color images using a red-green-blue digital camera

    NASA Astrophysics Data System (ADS)

    Luis Nieves, Juan; Plata, Clara; Valero, Eva M.; Romero, Javier

    2012-01-01

    Albedo estimation has traditionally been used to make computational simulations of real objects under different conditions, but as yet no device is capable of measuring albedo directly. The aim of this work is to introduce a photometric-based color imaging framework that can estimate albedo and can reproduce the appearance of images both indoors and outdoors under different lights and illumination geometries. Using a calibration sample set composed of chips made of the same material but with different colors and textures, we compare two photometric-stereo techniques, one of them avoiding the effect of shadows and highlights in the image and the other ignoring this constraint. We combined a photometric-stereo technique and a color-estimation algorithm that directly relates the camera sensor outputs to the albedo values. The proposed method can produce illuminant-free images with good color accuracy when a three-channel red-green-blue (RGB) digital camera is used, even outdoors under solar illumination.
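
    A minimal sketch of the Lambertian photometric-stereo step that underlies albedo recovery, assuming three images under known light directions (all values are synthetic; this is not the authors' calibrated pipeline):

```python
import numpy as np

# With >=3 images under known unit light directions L, a Lambertian pixel's
# intensities satisfy I = L @ (albedo * normal).
L = np.array([[0.0, 0.0, 1.0],
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])    # assumed (approximately unit) light directions

true_albedo = 0.8
true_n = np.array([0.0, 0.0, 1.0])
I = L @ (true_albedo * true_n)       # synthetic pixel measurements

g, *_ = np.linalg.lstsq(L, I, rcond=None)  # g = albedo * normal
albedo = np.linalg.norm(g)
normal = g / albedo
```

    Because albedo appears as the length of the scaled-normal vector, it is recovered independently of the illuminant, which is exactly what makes illuminant-free reproduction possible.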

  13. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for e.g. dark skin types. A small smart-phone-based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermal camera, including 5 of 6 patients without allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart-phone-based camera.

  14. Investigation on a small FoV gamma camera based on LaBr3:Ce continuous crystal

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Bennati, P.; Cinti, M. N.; Vittorini, F.; Scafè, R.; Lo Meo, S.; Navarria, F. L.; Moschini, G.; Orsolini Cencelli, V.; De Notaristefani, F.

    2009-12-01

    Recently, scintillating crystals with high light yield coupled to photodetectors with high quantum efficiency have opened a new way to build gamma cameras with superior performance based on continuous crystals. In this work we propose the analysis of a gamma camera based on a continuous LaBr3:Ce crystal coupled to a multi-anode photomultiplier tube (MA-PMT). In particular, we take into account four detector configurations, differing in crystal thickness and assembly. We utilize a new position algorithm to reduce the position non-linearity affecting the intrinsic spatial resolution of small-FoV gamma cameras when the standard Anger algorithm is applied. The experimental data are obtained by scanning the detectors with a 0.4 mm collimated 99mTc source at 1.5 mm steps. An improvement in position linearity and spatial resolution of about a factor of two is obtained with the new algorithm. The best values of spatial resolution were 0.90 mm, 0.95 mm and 1.80 mm for the integrally assembled, 4.0 mm thick, and 10 mm thick LaBr3:Ce crystals, respectively.
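
    For reference, the standard Anger (charge-weighted centroid) estimate that such position algorithms improve upon can be sketched as below. Anode positions and the light distribution are hypothetical; the small bias toward the center illustrates the edge non-linearity discussed above:

```python
import numpy as np

def anger_centroid(positions, charges):
    """Standard Anger logic: charge-weighted centroid of the anode signals."""
    charges = np.asarray(charges, dtype=float)
    return float(np.dot(positions, charges) / charges.sum())

# Hypothetical anode positions (mm) along one axis and a Gaussian light
# spread from a scintillation event at x = 3 mm.
positions = np.linspace(-12, 12, 8)
charges = np.exp(-(positions - 3.0) ** 2 / (2 * 4.0 ** 2))
est = anger_centroid(positions, charges)
# Edge truncation of the light cone pulls the estimate slightly toward the
# center -- the compression that degrades linearity near the crystal borders.
```

    Any improved algorithm essentially has to undo this compression of events near the field-of-view edge.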

  15. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal. PMID:22163620
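
    A toy one-dimensional sketch of the reaction-diffusion idea: an activator injected at the target's node diffuses to neighbours and decays, and each node maps the local concentration to a coding rate. Parameters are illustrative, not from the paper:

```python
import numpy as np

N, steps = 30, 200
D, decay, inject = 0.2, 0.05, 1.0   # diffusion, decay, injection (assumed)
target = 12                          # node currently observing the target
u = np.zeros(N)

for _ in range(steps):
    # Discrete Laplacian on a ring of nodes (explicit Euler step)
    lap = np.roll(u, 1) + np.roll(u, -1) - 2 * u
    u += D * lap - decay * u
    u[target] += inject              # reaction term at the target's node

# Map activator concentration to a per-node coding rate (normalized)
rate = 0.1 + 0.9 * u / u.max()
```

    The resulting spatial pattern, high rate at and around the target and a low floor elsewhere, is the camera-network analogue of the biological patterns the paper draws on.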

  16. A passive terahertz video camera based on lumped element kinetic inductance detectors

    NASA Astrophysics Data System (ADS)

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A. R.; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ˜0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  17. Pixel response non-uniformity correction for multi-TDICCD camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhai, Guofang

    2013-10-01

    A non-uniformity correction algorithm is proposed and implemented on a Field-Programmable Gate Array (FPGA) hardware platform to solve the pixel response non-uniformity (PRNU) problem of a multi Time Delay and Integration Charge Coupled Device (TDICCD) camera. The sources of non-uniformity are introduced and a combined correction algorithm is presented, in which a two-point correction method is used within a single channel, a gain-averaging correction method across channels, and a scene-adaptive correction method across the TDICCDs. The correction algorithm is then designed and, after analyzing the FPGA's capability for fixed-point processing, optimized and implemented on the FPGA. Test results indicate that the non-uniformity of the three-TDICCD camera's images can be decreased from 8.27% to 0.51% with the proposed correction algorithm, demonstrating that the algorithm offers high real-time performance, is readily realizable in hardware, and satisfies the system requirements.
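
    The single-channel two-point correction mentioned above can be sketched as follows, using synthetic per-pixel gains and offsets and two uniform reference exposure levels:

```python
import numpy as np

rng = np.random.default_rng(0)
gain_true = rng.uniform(0.8, 1.2, 256)      # per-pixel response, synthetic
offset_true = rng.uniform(5.0, 15.0, 256)   # per-pixel offset, synthetic

# Two calibration frames: dark (radiance 0) and flat field (radiance 100)
dark = offset_true
flat = gain_true * 100.0 + offset_true

# Two-point calibration: per-pixel gain/offset from the two reference levels
g = (flat - dark) / 100.0
o = dark

# Correct a raw frame of a uniform 42-radiance scene
scene_true = np.full(256, 42.0)
raw = gain_true * scene_true + offset_true
corrected = (raw - o) / g                   # recovers ~42 at every pixel
```

    In hardware, `g` and `o` become per-pixel coefficients stored in fixed point, which is why the paper's fixed-point analysis on the FPGA matters.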

  18. A Reaction-Diffusion-Based Coding Rate Control Mechanism for Camera Sensor Networks

    PubMed Central

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring thanks to its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism where each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity of biological spatial patterns and the spatial distribution of video coding rate. Through simulation and practical experiments, we verify the effectiveness of our proposal. PMID:22163620

  19. On-orbit calibration approach for star cameras based on the iteration method with variable weights.

    PubMed

    Wang, Mi; Cheng, Yufeng; Yang, Bo; Chen, Xiao

    2015-07-20

    To perform efficient on-orbit calibration for star cameras, we developed an attitude-independent calibration approach for global optimization and noise removal by least-square estimation using multiple star images, with which the optimal principal point, focal length, and the high-order focal plane distortion can be obtained in one step in full consideration of the interaction among star camera parameters. To avoid the problem when stars could be misidentified in star images, an iteration method with variable weights is introduced to eliminate the influence of misidentified star pairs. The approach can increase the precision of least-square estimation and use fewer star images. The proposed approach has been well verified to be precise and robust in three experiments. PMID:26367824
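
    The iteration with variable weights can be illustrated with a generic iteratively re-weighted least-squares sketch. The design matrix here is a toy stand-in, not a star-camera model, and the weight function is one plausible choice for down-weighting misidentified star pairs:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(60, 3))             # toy design matrix (not a camera model)
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + rng.normal(scale=0.01, size=60)
b[7] += 5.0                              # one gross outlier ("misidentified star")

w = np.ones(60)
for _ in range(10):
    # Weighted least squares with the current weights
    x, *_ = np.linalg.lstsq(A * w[:, None], w * b, rcond=None)
    r = b - A @ x
    # Robust scale from the median absolute residual (MAD)
    s = 1.4826 * np.median(np.abs(r)) + 1e-12
    # Shrink the weight of observations with residuals beyond ~3 sigma
    w = 1.0 / np.maximum(1.0, (np.abs(r) / (3 * s)) ** 2)
```

    After a few iterations the outlier's weight collapses toward zero and the estimate matches what least squares would give on the clean observations alone.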

  20. A passive terahertz video camera based on lumped element kinetic inductance detectors.

    PubMed

    Rowe, Sam; Pascale, Enzo; Doyle, Simon; Dunscombe, Chris; Hargrave, Peter; Papageorgio, Andreas; Wood, Ken; Ade, Peter A R; Barry, Peter; Bideaud, Aurélien; Brien, Tom; Dodd, Chris; Grainger, William; House, Julian; Mauskopf, Philip; Moseley, Paul; Spencer, Locke; Sudiwala, Rashmi; Tucker, Carole; Walker, Ian

    2016-03-01

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)--designed originally for far-infrared astronomy--as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics. PMID:27036756

  1. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM usage of the device increases considerably, we can maintain real-time constraints by exploiting extremely high-performance signal processing through parallelism and by accessing several memories simultaneously. The quantified results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm. PMID:22163404

  2. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  3. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    Visual surveillance systems for law enforcement or police case investigation differ from traditional applications, since they are designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about the targets and events monitored, and risk entropy is introduced to model the requirements a police surveillance task places on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.

  4. Split ring resonator based THz-driven electron streak camera featuring femtosecond resolution

    PubMed Central

    Fabiańska, Justyna; Kassier, Günther; Feurer, Thomas

    2014-01-01

    Through combined three-dimensional electromagnetic and particle tracking simulations we demonstrate a THz driven electron streak camera featuring a temporal resolution on the order of a femtosecond. The ultrafast streaking field is generated in a resonant THz sub-wavelength antenna which is illuminated by an intense single-cycle THz pulse. Since electron bunches and THz pulses are generated with parts of the same laser system, synchronization between the two is inherently guaranteed. PMID:25010060

  5. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images utilizing a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of the depth image. Captured video of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronously captured IR images. The imaging system consists of a 2x2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2x2 lens array, and the transmitted images are captured on the image sensor, resulting in 2x2 sub-IR images. As a result, the depth image is generated from those four simultaneously captured independent sub-IR images, and hence the motion blur problem is canceled. The resulting performance is very useful in applications of 3D cameras to human-machine interaction devices such as user interfaces of TVs, monitors, or handheld devices, and to motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
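
    A common way such TOF cameras recover depth from 4 modulated IR images is the 4-phase ("four-bucket") scheme. The sketch below assumes that scheme and the 20 MHz modulation quoted above, with noise-free synthetic samples:

```python
import math

C = 299_792_458.0   # speed of light (m/s)
F_MOD = 20e6        # 20 MHz shutter modulation, as quoted above

def depth_from_samples(a0, a90, a180, a270, f=F_MOD):
    """Depth from four correlation samples at 0/90/180/270 degree offsets."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f)

# Synthesize the four samples for a target at 2.5 m (noise-free)
d_true = 2.5
phi = 4 * math.pi * F_MOD * d_true / C
samples = [math.cos(phi - k * math.pi / 2) for k in range(4)]
d_est = depth_from_samples(*samples)
print(d_est)  # ≈ 2.5
```

    At 20 MHz the unambiguous range is c/(2f) = 7.5 m; the difference pairs cancel ambient offset, which is why four samples rather than two are used.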

  6. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys (although estimates derived using non-spatial methods (7.28-9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100km2)). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  7. Auto-measurement system of aerial camera lens' resolution based on orthogonal linear CCD

    NASA Astrophysics Data System (ADS)

    Zhao, Yu-liang; Zhang, Yu-ye; Ding, Hong-yi

    2010-10-01

    The resolution of an aerial camera lens is one of the camera's most important performance indexes, and its measurement and calibration are important test items in camera maintenance. The traditional method of observing the resolution panel of a collimator relies on the human eye, using a reading microscope and manual computation; it is inefficient, susceptible to human factors, and yields unstable measurement results. An auto-measurement system for aerial camera lens resolution is introduced, which uses orthogonal linear CCD sensors as the detector in place of the reading microscope. The system measures automatically and shows results in real time. To measure the smallest identifiable element of the resolution panel, two orthogonal linear CCDs are placed on the imaging plane of the measured lens, forming four intersection points on the CCDs. A coordinate system is defined by the origin of the linear CCDs, and a circle is determined by the four intersection points. To obtain the circle's radius, the image of the resolution panel is first converted into the pulse width of an electric signal, which is sent to a computer through an amplifying circuit, a threshold comparator, and a counter. Then the smallest circle is extracted for measurement. Circle extraction makes use of the wavelet transform, which is localized in both the time and frequency domains and is capable of multi-scale analysis. Finally, according to the formula for lens resolution, the resolution of the measured lens is obtained. The measurement precision in practical use is analyzed, and the result indicates that precision improves when a linear CCD is used instead of the reading microscope. Moreover, the residual system error is determined by the pixel size of the CCD; as CCD technology develops, pixel sizes will become smaller and the system error will be greatly reduced as well. So the auto

  8. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and the comfortable viewing zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on the specific objects when cameras and background are static; relative motion should be considered for different camera conditions, determining different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using the multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and the total visual fatigue score can be computed according to the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
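
    The multiple-linear-regression step can be sketched as follows, with fabricated factor scores and subjective fatigue ratings standing in for real data:

```python
import numpy as np

# Fabricated per-shot factor scores: spatial structure, motion scale,
# comfort-zone violation (columns), plus fabricated subjective ratings.
rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(40, 3))
w_true = np.array([0.5, 1.2, 0.8])                       # synthetic ground truth
y = X @ w_true + 0.3 + rng.normal(scale=0.05, size=40)   # MOS-like fatigue score

# Ordinary least squares with an intercept column
A = np.column_stack([X, np.ones(40)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

predicted = A @ coef
corr = np.corrcoef(predicted, y)[0, 1]   # correlation with subjective scores
```

    The fitted coefficients play the role of the per-factor weights, and the correlation between predicted and subjective scores is the figure of merit the abstract refers to.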

  9. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed exposure ones. A real-time hardware implementation of the HDR technique that shows more details both in dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capturing and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, HDR frame generation, and representation under a hardware context. Our camera achieves a real-time HDR video output at 60 fps at 1.3 megapixels and demonstrates the efficiency of our technique through an experimental result. Applications of this HDR smart camera include the movie industry, the mass-consumer market, military, automotive industry, and surveillance.
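A per-pixel radiance merge from multiple exposures, of the kind such HDR pipelines implement in hardware, can be sketched in software as follows. This is a simplified illustration assuming a linear sensor response and 8-bit pixel values; the hat weighting function and its thresholds are illustrative choices, not taken from the paper:

```python
def hat_weight(z, z_min=5, z_max=250):
    """Triangle weight favouring mid-range values; zero for saturated or
    nearly-black pixels (8-bit range assumed)."""
    if z < z_min or z > z_max:
        return 0.0
    mid = 0.5 * (z_min + z_max)
    return 1.0 - abs(z - mid) / (mid - z_min)

def merge_hdr(pixels, exposure_times):
    """Estimate the scene radiance of one pixel from several exposures,
    assuming a linear response: pixel value z ~ radiance * exposure time."""
    num = den = 0.0
    for z, t in zip(pixels, exposure_times):
        w = hat_weight(z)
        num += w * (z / t)       # each exposure votes for radiance z/t
        den += w
    return num / den if den > 0 else 0.0
```

For a pixel seen as 50, 100, and 200 under exposure times 0.5, 1.0, and 2.0, every exposure implies the same radiance (100), and the weighted merge returns it.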

  10. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. 
The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
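The per-voxel distance-and-threshold idea can be sketched as follows. This is a simplified software illustration: the paper solves the cone/plane intersection per image slice, whereas this sketch scores voxels by an angular-distance approximation to the cone surface:

```python
import math

def cone_distance(voxel, apex, axis, half_angle):
    """Approximate distance from a voxel to the cone surface:
    |angle(voxel - apex, axis) - half_angle| * |voxel - apex|."""
    d = [v - a for v, a in zip(voxel, apex)]
    norm = math.sqrt(sum(x * x for x in d))
    if norm == 0.0:
        return 0.0
    cosang = sum(x * u for x, u in zip(d, axis)) / norm
    cosang = max(-1.0, min(1.0, cosang))          # guard acos domain
    return abs(math.acos(cosang) - half_angle) * norm

def backproject_slice(grid_pts, apex, axis, half_angle, threshold):
    """Binary intersection curve for one slice: 1 where the voxel is
    sufficiently close to the event's conical surface."""
    return [1 if cone_distance(p, apex, axis, half_angle) <= threshold else 0
            for p in grid_pts]
```

Accumulating these binary slices over all detector events yields the three-dimensional back-projected image the abstract describes.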

  11. Positron emission tomography: physics, instrumentation, and image analysis.

    PubMed

    Porenta, G

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways, and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. PMID:7941595

  12. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  13. A study of defects in iron-based binary alloys by the Mössbauer and positron annihilation spectroscopies

    SciTech Connect

    Idczak, R.; Konieczny, R.; Chojcan, J.

    2014-03-14

    The room-temperature positron annihilation lifetime spectra and {sup 57}Fe Mössbauer spectra were measured for pure Fe as well as for iron-based Fe{sub 1−x}Re{sub x}, Fe{sub 1−x}Os{sub x}, Fe{sub 1−x}Mo{sub x}, and Fe{sub 1−x}Cr{sub x} solid solutions, where x is in the range between 0.01 and 0.05. The measurements were performed to check whether the theoretical calculations known from the literature on the interactions between vacancies and solute atoms in iron can be supported by experimental data. The vacancies were created during the formation and subsequent mechanical processing of the iron systems under consideration, so the spectra mentioned above were collected at least twice for each studied sample synthesized in an arc furnace: after cold rolling to a thickness of about 40 μm, and again after subsequent annealing at 1270 K for 2 h. It was found that only in Fe and the Fe-Cr system are the isolated vacancies thermally generated at high temperatures not observed at room temperature, and cold rolling of these materials creates another type of vacancy associated with edge dislocations. In the other cold-rolled systems, positrons detect vacancies of both types mentioned above, and Mössbauer nuclei “see” the vacancies mainly in the vicinity of non-iron atoms. This supports the suggestion that in the iron matrix the solute atoms of Os, Re, and Mo interact attractively with vacancies, as predicted by theoretical computations, and that the interaction energy is large enough for vacancy-solute-atom pairs to exist at room temperature. By contrast, the corresponding interaction for Cr atoms is either repulsive, or attractive but weaker than that for Os, Re, and Mo atoms. The latter is in agreement with the theoretical calculations.

  14. Imaging performance comparison between a LaBr3:Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera.

    PubMed

    Russo, P; Mettivier, G; Pani, R; Pellegrini, R; Cinti, M N; Bennati, P

    2009-04-01

    The authors report on the performance of two small field-of-view, compact gamma cameras working in single photon counting mode in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr3:Ce continuous scintillator crystal (49 x 49 x 5 mm3) assembled with a flat-panel multianode photomultiplier tube with parallel readout. The second belongs to the class of semiconductor hybrid pixel detectors, specifically a CdTe pixel detector (14 x 14 x 1 mm3) with 256 x 256 square pixels and a pitch of 55 μm, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with a selectable energy window, while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 μm, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported. PMID:19472638

  15. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height that maximizes the correlation of the apparent patterns in the localized pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute complementarily to auroral science, potentially forming a dense observation network.

  16. On-Line Detection of Defects on Fruit by Machine Vision Systems Based on Three Color Cameras

    NASA Astrophysics Data System (ADS)

    Xul, Qiaobao; Zou, Xiaobo; Zhao, Jiewen

    How to distinguish apple stem-ends and calyxes from defects is still a challenging problem due to the complexity of the process. It is known that a stem-end and a calyx cannot appear in the same image. Therefore, a method for identifying contaminated apples is developed in this article: if there are two or more doubtful blobs in an apple's image, the apple is a contaminated one. The method requires no complex image processing or pattern recognition, because it only needs to count the blobs (including stem-ends and calyxes) in an apple's image. A machine vision system based on three color cameras is presented in this article for the on-line detection of external defects. In this system, the fruits placed on rollers rotate while moving, and each camera placed along the line grabs three images of each apple. After the apple is segmented from the black background by a multi-threshold method, defect segmentation and counting are performed on the apple images. Good separation between normal and contaminated apples was obtained with the three-camera system (94.5%), compared to the one-camera system (63.3%) and the two-camera system (83.7%). The disadvantage of this method is that it cannot distinguish different defect types: defects such as bruising, scab, fungal growth, and disease are all treated the same.
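The blob-counting rule can be sketched with a simple connected-component count. This is an illustrative sketch, not the authors' implementation; the binary defect mask (stem-ends, calyxes, and defects segmented as foreground) is assumed to be given:

```python
def count_blobs(mask):
    """Count 4-connected foreground blobs in a binary image (list of lists)."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for r in range(h):
        for c in range(w):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                seen[r][c] = True
                stack = [(r, c)]              # iterative flood fill
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return blobs

def is_contaminated(mask):
    """A stem-end and a calyx cannot both appear in one view, so two or
    more doubtful blobs imply at least one true defect."""
    return count_blobs(mask) >= 2
```

A view with a single blob (possibly just the stem-end or calyx) passes, while any view with two or more blobs flags the apple.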

  17. Development of a non-delay line constant fraction discriminator based on the Padé approximant for time-of-flight positron emission tomography scanners

    NASA Astrophysics Data System (ADS)

    Kim, S. Y.; Ko, G. B.; Kwon, S. I.; Lee, J. S.

    2015-01-01

    In positron emission tomography, the constant fraction discriminator (CFD) circuit is used to acquire accurate arrival times for the annihilation photons with minimum sensitivity to time walk. As the number of readout channels increases, it becomes difficult to use conventional CFDs because of the large amount of space required for the delay line part of the circuit. To make the CFD compact, flexible, and easily controllable, a non-delay-line CFD based on the Padé approximant is proposed. The non-delay-line CFD developed in this study is shown to have timing performance that is similar to that of a conventional delay-line-based CFD in terms of the coincidence resolving time of a fast photomultiplier tube detector. This CFD can easily be applied to various positron emission tomography system designs that contain high-density detectors with multi-channel structures.
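One way to realize a delay-free CFD with a Padé approximant can be sketched as follows, under stated assumptions: the ideal delay exp(-s·tau) is replaced by its first-order (1,1) Padé approximant, discretized with the bilinear transform into an all-pass filter. The filter order, fraction, and signal below are illustrative; the paper's actual implementation may differ:

```python
def pade_delay(x, tau, T):
    """First-order all-pass from the (1,1) Pade approximant of exp(-s*tau),
    (1 - s*tau/2)/(1 + s*tau/2), bilinear-discretized with sample period T;
    approximates a tau-second delay without a delay line."""
    k = tau / T
    y, xp, yp = [], 0.0, 0.0
    for xn in x:
        yn = ((1 - k) * xn + (1 + k) * xp - (1 - k) * yp) / (1 + k)
        y.append(yn)
        xp, yp = xn, yn
    return y

def cfd_crossing(x, tau, T, fraction=0.3):
    """Constant-fraction discriminator: delayed minus attenuated signal;
    returns the interpolated zero-crossing position in samples."""
    d = pade_delay(x, tau, T)
    c = [di - fraction * xi for di, xi in zip(d, x)]
    for n in range(1, len(c)):
        if c[n - 1] < 0.0 <= c[n]:
            # linear interpolation between the bracketing samples
            return (n - 1) + (-c[n - 1]) / (c[n] - c[n - 1])
    return None
```

The key CFD property, a crossing time independent of pulse amplitude (i.e., no time walk for linearly scaled pulses), holds because every operation here is linear in the input.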

  18. A field-based technique for the longitudinal profiling of ultrarelativistic electron or positron bunches down to lengths of ≤10 microns

    SciTech Connect

    Tatchyn, R.

    1993-05-01

    Present and future generations of particle accelerating and storage machines are expected to develop ever-decreasing electron/positron bunch lengths, down to 100 μm and beyond. In this paper a method for measuring the longitudinal profiles of ultrashort (1000 μm to ≈10 μm) bunches is outlined, based on: (1) the extreme field compaction attained by ultrarelativistic particles, and (2) the reduction of the group velocity of a visible light pulse in a suitably chosen dielectric medium.

  19. The color measurement system for spot color printing based on a multispectral camera

    NASA Astrophysics Data System (ADS)

    Liu, Nanbo; Jin, Weiqi; Huang, Qinmei; Song, Li

    2014-11-01

    Color measurement and control in printing has been an important issue in computer vision technology. In the past, density meters and spectrophotometers have been used to measure the color of printed products. For the color management of a four-color press, such meters let operators measure color data from the color bar printed at the edge of the sheet and then preset the ink keys; this approach is widely used in the printing field. However, it cannot measure the color of spot color printing or of the printed pattern directly. The development of multispectral image acquisition makes it possible to measure the color of the printed pattern in any area of the pattern using a CCD camera that can acquire the whole image of the sheet at high resolution. This paper presents a way to measure the color of printing with a multispectral camera during the printing process. A 12-channel spectral camera with high-intensity white LED illumination, driven by a motor, scans the printed sheet. The acquired image contains the color and print quality information of each pixel, and the LAB and CMYK values of each pixel can be obtained by reconstructing the reflectance spectra of the printed image. With this processing, the color of spot color printing can be measured and controlled. Tests in a printing plant show that this approach yields not only the color bar density values but also ROI color values. With these values, ink key presetting can be performed, making it possible to control spot color automatically with high precision.

  20. Demonstration of First 9 Micron cutoff 640 x 486 GaAs Based Quantum Well Infrared PhotoDetector (QWIP) Snap-Shot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Maker, P. D.; Muller, R. E.

    1997-01-01

    In this paper, we discuss the development of this very sensitive long-wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, NEΔT, uniformity, and operability.

  1. Infrared Camera

    NASA Technical Reports Server (NTRS)

    1997-01-01

    A sensitive infrared camera that observes the blazing plumes from the Space Shuttle or expendable rocket lift-offs is capable of scanning for fires, monitoring the environment and providing medical imaging. The hand-held camera uses highly sensitive arrays in infrared photodetectors known as quantum well infrared photo detectors (QWIPS). QWIPS were developed by the Jet Propulsion Laboratory's Center for Space Microelectronics Technology in partnership with Amber, a Raytheon company. In October 1996, QWIP detectors pointed out hot spots of the destructive fires speeding through Malibu, California. Night vision, early warning systems, navigation, flight control systems, weather monitoring, security and surveillance are among the duties for which the camera is suited. Medical applications are also expected.

  2. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. 
The performance tests of this camera

  3. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  4. Microlens assembly error analysis for light field camera based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Li, Sai; Yuan, Yuan; Zhang, Hao-Wei; Liu, Bin; Tan, He-Ping

    2016-08-01

    This paper describes a numerical analysis of microlens assembly errors in light field cameras using the Monte Carlo method. Assuming that there were no manufacturing errors, a home-built program was used to simulate the images produced by the coupling distance, movement, and rotation errors that can appear during microlens installation. By examining these images, together with the sub-aperture and refocused images, we found that the images exhibit different degrees of blur and deformation for different microlens assembly errors, while the sub-aperture image presents aliasing, obscured regions, and other distortions that result in unclear refocused images.

  5. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
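The accuracy metric used throughout, the Dice similarity coefficient, can be computed with a minimal sketch over voxel index sets (an illustration; ATLAAS itself applies it to full 3-D contours):

```python
def dice_similarity(a, b):
    """Dice similarity coefficient between two voxel sets:
    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 = perfect overlap, 0.0 = disjoint."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0            # convention: two empty contours agree
    return 2.0 * len(a & b) / (len(a) + len(b))
```

ATLAAS's decision trees are trained to predict this score for each candidate PET-AS method, so the method with the highest predicted DSC can be selected when the true contour is unknown.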

  6. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography.

    PubMed

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contour. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contour, to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology. PMID:27273293

  7. KEK-IMSS Slow Positron Facility

    NASA Astrophysics Data System (ADS)

    Hyodo, T.; Wada, K.; Yagishita, A.; Kosuge, T.; Saito, Y.; Kurihara, T.; Kikuchi, T.; Shirakawa, A.; Sanami, T.; Ikeda, M.; Ohsawa, S.; Kakihara, K.; Shidara, T.

    2011-12-01

    The Slow Positron Facility at the Institute of Materials Structure Science (IMSS) of the High Energy Accelerator Research Organization (KEK) is a user-dedicated facility with an energy-tunable (0.1 - 35 keV) slow positron beam produced by a dedicated 55 MeV linac. The present beam line branches have been used for positronium time-of-flight (Ps-TOF) measurements, the transmission positron microscope (TPM) and the photo-detachment of Ps negative ions (Ps-). During the year 2010, a reflection high-energy positron diffraction (RHEPD) measurement station is going to be installed. The slow positron generator (converter/moderator) system will be modified to obtain a higher slow positron intensity, and a new user-friendly beam line power-supply control and vacuum monitoring system is being developed. Another plan for this year is the transfer of a 22Na-based slow positron beam from RIKEN. This machine will be used for continuous slow positron beam applications and for the orientation training of those who are interested in beginning research with a slow positron beam.

  8. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    PubMed

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photo detector results in an increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photo detectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect. PMID:27410133
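The square-root scaling of the velocity uncertainty with the number of camera lines can be illustrated with a toy Monte Carlo experiment (illustrative only; independent Gaussian per-line frequency noise is an assumption, not the paper's speckle model):

```python
import random
import statistics

def velocity_uncertainty(n_lines, trials=2000, sigma=1.0, seed=7):
    """Standard deviation of the mean Doppler-frequency estimate over
    n_lines camera lines, each carrying independent noise of std sigma."""
    rng = random.Random(seed)
    means = [statistics.fmean(rng.gauss(0.0, sigma) for _ in range(n_lines))
             for _ in range(trials)]
    return statistics.pstdev(means)
```

Averaging over 100 lines reduces the spread of the estimate by roughly a factor of 10 relative to a single line, i.e., the 1/sqrt(N) behaviour the abstract describes.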

  9. Visual odometry based on structural matching of local invariant features using stereo camera sensor.

    PubMed

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated to the images of the stereo pairs and the second one between sets of features associated to consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching algorithms are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
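The step of finding the largest set of mutually consistent matches as a maximum-weighted clique can be sketched with a brute-force search. This is fine only for small candidate sets; the node weights and the edge (consistency) relation below are placeholders, not the paper's feature constraints:

```python
from itertools import combinations

def max_weight_clique(weights, edges):
    """Brute-force maximum-weighted clique.
    weights: {match_id: weight}; edges: set of frozensets, one per pair of
    matches that satisfy the mutual-consistency constraints."""
    nodes = list(weights)
    best, best_w = [], 0.0
    for r in range(1, len(nodes) + 1):
        for cand in combinations(nodes, r):
            # A clique requires every pair in the candidate to be connected.
            if all(frozenset((u, v)) in edges
                   for u, v in combinations(cand, 2)):
                w = sum(weights[n] for n in cand)
                if w > best_w:
                    best, best_w = list(cand), w
    return best, best_w
```

In practice an exact or heuristic clique solver replaces this exhaustive loop, but the accepted clique plays the same role: its member matches are the ones kept for motion estimation.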

  10. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first between sets of features associated with the images of the stereo pairs, and the second between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both the industrial and service robot fields. PMID:22164016

  11. Design and analysis of filter-based optical systems for spectral responsivity estimation of digital video cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Jian, Hong-Da; Yeh, Zong-Mu; Cheng, Chin-Pao

    2004-02-01

In this paper, a filter-based optical system with carefully selected filters is designed for estimating the spectral responsivities of digital video cameras. Filter selection in the presence of noise is central to the optical system design, since the spectral filters primarily prescribe the structure of the perturbed system. A theoretical basis is presented to confirm that careful filter selection can make this system as insensitive to noise as possible. We also propose a filter selection method based on the orthogonal-triangular (QR) decomposition with column pivoting (QRCP). To investigate the noise effects, we assess the estimation errors between the actual and estimated spectral responsivities at different signal-to-noise ratio (SNR) levels of an eight-bit/channel camera. Simulation results indicate that the proposed method yields satisfactory estimation accuracy: the filter-based optical system with spectral filters selected by the QRCP-based method is much less sensitive to noise than those with other filters from different selections.
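The pivot order produced by QR with column pivoting can be sketched with greedy Gram-Schmidt deflation: at each step, pick the column (filter) whose residual norm is largest after projecting out the columns already chosen. This is a generic illustration of the QRCP selection idea, not the authors' code; the sample column vectors in the test are hypothetical:

```python
import math

def qrcp_select(columns, k):
    """Greedy column selection matching QRCP's pivot order: pick the
    column with the largest residual norm, then remove its component
    from the remaining columns (Gram-Schmidt deflation). Assumes at
    least k linearly independent columns."""
    resid = [list(c) for c in columns]   # running residual of each column
    chosen = []
    for _ in range(k):
        j = max((i for i in range(len(resid)) if i not in chosen),
                key=lambda i: sum(x * x for x in resid[i]))
        qn = math.sqrt(sum(x * x for x in resid[j]))
        q = [x / qn for x in resid[j]]   # normalized pivot direction
        chosen.append(j)
        for i in range(len(resid)):      # deflate the rest
            if i in chosen:
                continue
            dot = sum(a * b for a, b in zip(q, resid[i]))
            resid[i] = [a - dot * b for a, b in zip(resid[i], q)]
    return chosen
```

The first pivot is always the column with the largest norm; later pivots favor columns least explainable by those already selected, which is what makes the resulting filter set well-conditioned.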

  12. Undulator-Based Production of Polarized Positrons, A Proposal for the 50-GeV Beam in the FFTB

    SciTech Connect

    G. Alexander; P. Anthony; V. Bharadwaj; Yu.K. Batygin; T. Behnke; S. Berridge; G.R. Bower; W. Bugg; R. Carr; E. Chudakov; J.E. Clendenin; F.J. Decker; Yu. Efremenko; T. Fieguth; K. Flottmann; M. Fukuda; V. Gharibyan; T. Handler; T. Hirose; R.H. Iverson; Yu. Kamyshkov; H. Kolanoski; T. Lohse; Chang-guo Lu; K.T. McDonald; N. Meyners; R. Michaels; A.A. Mikhailichenko; K. Monig; G. Moortgat-Pick; M. Olson; T. Omori; D. Onoprienko; N. Pavel; R. Pitthan; M. Purohit; L. Rinolfi; K.P. Schuler; J.C. Sheppard; S. Spanier; A. Stahl; Z.M. Szalata; J. Turner; D. Walz; A. Weidemann; J. Weisend

    2003-06-01

The full exploitation of the physics potential of future linear colliders such as the JLC, NLC, and TESLA will require the development of polarized positron beams. In the proposed scheme of Balakin and Mikhailichenko [1] a helical undulator is employed to generate photons of several MeV with circular polarization which are then converted in a relatively thin target to generate longitudinally polarized positrons. This experiment, E-166, proposes to test this scheme to determine whether such a technique can produce polarized positron beams of sufficient quality for use in future linear colliders. The experiment will install a meter-long, short-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) at SLAC. A low-emittance 50-GeV electron beam passing through this undulator will generate circularly polarized photons with energies up to 10 MeV. These polarized photons are then converted to polarized positrons via pair production in thin targets. Titanium and tungsten targets, which are both candidates for use in linear colliders, will be tested. The experiment will measure the flux and polarization of the undulator photons, and the spectrum and polarization of the positrons produced in the conversion target, and compare the measurement results to simulations. Thus the proposed experiment directly tests for the first time the validity of the simulation programs used for the physics of polarized pair production in finite matter, in particular the effects of multiple scattering on polarization. Successful comparison of the experimental results to the simulations will lead to greater confidence in the proposed designs of polarized positron sources for the next generation of linear colliders. This experiment requests six weeks of time in the FFTB beam line: three weeks for installation and setup and three weeks of beam for data taking. A 50-GeV beam with about twice the SLC emittance at a repetition rate of 30 Hz is required.

  13. Instrumentation optimization for positron emission mammography

    SciTech Connect

    Moses, William W.; Qi, Jinyi

    2003-06-05

The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  14. Targetless Camera Calibration

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Mussio, L.; Remondino, F.; Scaioni, M.

    2011-09-01

In photogrammetry a camera is considered calibrated if its interior orientation parameters are known. These encompass the principal distance, the principal point position, and some Additional Parameters used to model possible systematic errors. The current state of the art for automated camera calibration relies on the use of coded targets to accurately determine the image correspondences. This paper presents a new methodology for the efficient and rigorous photogrammetric calibration of digital cameras which no longer requires the use of targets. A set of images depicting a scene with good texture is sufficient for the extraction of natural corresponding image points. These are automatically matched with feature-based approaches and robust estimation techniques. The subsequent photogrammetric bundle adjustment retrieves the unknown camera parameters and their theoretical accuracies. Examples, considerations, and comparisons with real data and different case studies are illustrated to show the potential of the proposed methodology.

  15. A robust method, based on a novel source, for performance and diagnostic capabilities assessment of the positron emission tomography system.

    PubMed

    Samartzis, Alexandros P; Fountos, George P; Kandarakis, Ioannis S; Kounadi, Evangelia P; Zoros, Emmanuel N; Skoura, Evangelia; Datseris, Ioannis E; Nikiforides, George H

    2014-01-01

The aim of our work was to provide a robust method for evaluating the imaging performance of positron emission tomography (PET) systems, and particularly to estimate the modulation transfer function (MTF) using the line spread function (LSF) method. A novel plane source was prepared using thin layer chromatography (TLC) of a fluorine-18-fluorodeoxyglucose ((18)F-FDG) solution. The source was placed within a phantom and imaged using the whole body (WB) two-dimensional (2D) and three-dimensional (3D) standard imaging protocols in a GE Discovery ST hybrid PET/CT scanner. The modulation transfer function was evaluated by determining the LSF for various reconstruction methods and filters. The proposed MTF measurement method was validated against the conventional method, based on the point spread function (PSF). Higher MTF values were obtained with the 3D scanning protocol and 3D iterative reconstruction algorithm. All MTFs obtained using 3D reconstruction algorithms showed better preservation of higher frequencies than the 2D algorithms; they also exhibited better contrast and resolution. MTFs derived from the LSF were more precise than those obtained from the PSF, since their reproducibility was better in all cases, with a mean standard deviation of 0.0043 compared to 0.0405 for the PSF method. In conclusion, the proposed method is novel and easy to implement for characterization of the signal transfer properties and image quality of PET/computed tomography (CT) systems. It provides an easy way to evaluate the frequency response of each available kernel. The proposed method requires cheap and easily accessible materials, available to the medical physicist in the nuclear medicine department. Furthermore, it is robust to aliasing and, being based on the LSF, is more resilient to noise due to greater data averaging than conventional PSF-integration techniques. PMID:25097895
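The LSF-to-MTF step can be illustrated with a plain discrete Fourier transform: the presampled MTF is the magnitude of the DFT of the line spread function, normalized to its zero-frequency value. This is a generic sketch of the standard relation, not the authors' processing chain:

```python
import cmath

def mtf_from_lsf(lsf, pixel_mm):
    """MTF as the normalized DFT magnitude of the line spread
    function; returns (frequencies in cycles/mm, MTF values)."""
    n = len(lsf)
    freqs, mtf = [], []
    for k in range(n // 2 + 1):
        s = sum(lsf[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n))
        freqs.append(k / (n * pixel_mm))  # spatial frequency axis
        mtf.append(abs(s))
    dc = mtf[0]
    return freqs, [v / dc for v in mtf]
```

For any peaked, non-negative LSF the resulting curve starts at 1.0 at zero frequency and falls off toward higher frequencies, which is the behavior compared across reconstruction kernels in the abstract.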

  16. Nikon Camera

    NASA Technical Reports Server (NTRS)

    1980-01-01

The Nikon FM compact has a simplification feature derived from cameras designed for easy yet accurate use in a weightless environment. The innovation is a plastic-cushioned advance lever which advances the film and simultaneously switches on a built-in light meter. With a turn of the lens aperture ring, a glowing signal in the viewfinder confirms correct exposure.

  17. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  18. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  19. Assimilation of PFISR Data Using Support Vector Regression and Ground Based Camera Constraints

    NASA Astrophysics Data System (ADS)

    Clayton, R.; Lynch, K. A.; Nicolls, M. J.; Hampton, D. L.; Michell, R.; Samara, M.; Guinther, J.

    2013-12-01

In order to best interpret the information gained from multipoint in situ measurements, a Support Vector Regression (SVR) algorithm is being developed to interpret the data collected from the instruments in the context of ground observations (such as those from a camera or radar array). The idea behind SVR is to construct the simplest function that models the data with the least squared error, subject to constraints given by the user. Constraints can be brought into the algorithm from other data sources or from models. As is often the case with data, a perfect solution to such a problem may be impossible, so 'slack' may be introduced to control how closely the model adheres to the data. The algorithm employs kernels, with radial basis functions chosen as an appropriate kernel. The current SVR code can take input data as one- to three-dimensional scalars or vectors, and may also include time. External data can be incorporated and assimilated into a model of the environment. Regions of minimal and maximal values are allowed to relax to the sample average (or a user-supplied model) on size and time scales determined by user input, known as feature sizes. These feature sizes can vary for each degree of freedom if the user desires. The user may also select weights for each data point, if it is desirable to weight parts of the data differently. In order to test the algorithm, Poker Flat Incoherent Scatter Radar (PFISR) and MICA sounding rocket data are being used as sample data. The PFISR data consists of many beams, each with multiple ranges. In addition to analyzing the radar data as it stands, the algorithm is being used to simulate data from a localized ionospheric swarm of CubeSats using existing PFISR data. The sample points of the radar at one altitude slice can serve as surrogates for satellites in a CubeSat swarm. The number of beams of the PFISR radar can then be used to see what the algorithm would output for a swarm of similar size. By using PFISR data in the 15-beam to
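The two ingredients named above, an RBF kernel with per-degree-of-freedom feature sizes and the epsilon-insensitive "slack", might look like the following. This is a generic SVR sketch under assumed parameter names, not the authors' code:

```python
import math

def rbf_kernel(x, y, feature_sizes):
    """Anisotropic RBF kernel: each degree of freedom gets its own
    feature size (length scale), as described in the abstract."""
    s = sum(((a - b) / ell) ** 2
            for a, b, ell in zip(x, y, feature_sizes))
    return math.exp(-0.5 * s)

def eps_insensitive_loss(y_true, y_pred, eps):
    """SVR's epsilon-insensitive loss: deviations inside the eps
    'slack' tube cost nothing; outside, cost grows linearly."""
    return max(0.0, abs(y_true - y_pred) - eps)
```

The kernel equals 1 only when the two points coincide and decays on the scale set per dimension, which is how the algorithm lets the fit relax toward the background model away from data.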

  20. The new SCOS-based EGSE of the EPIC flight-spare on-ground cameras

    NASA Astrophysics Data System (ADS)

    La Palombara, Nicola; Abbey, Anthony; Insinga, Fernando; Calderon-Riano, Pedro; Casale, Mauro; Kirsch, Marcus; Martin, James; Munoz, Ramon; Palazzo, Maddalena; Poletti, Mauro; Sembay, Steve; Vallejo, Juan C.; Villa, Gabriele

    2014-07-01

    The XMM-Newton observatory, launched by the European Space Agency in 1999, is still one of the scientific community's most important high-energy astrophysics missions. After almost 15 years in orbit its instruments continue to operate smoothly with a performance close to the immediate post-launch status. The competition for the observing time remains very high with ESA reporting a very healthy over-subscription factor. Due to the efficient use of spacecraft consumables XMM-Newton could potentially be operated into the next decade. However, since the mission was originally planned for 10 years, progressive ageing and/or failures of the on-board instrumentation can be expected. Dealing with them could require substantial changes of the on-board operating software, and of the command and telemetry database, which could potentially have unforeseen consequences for the on-board equipment. In order to avoid this risk, it is essential to test these changes on ground, before their upload. To this aim, two flight-spare cameras of the EPIC experiment (one MOS and one PN) are available on-ground. Originally they were operated through an Electrical Ground Support Equipment (EGSE) system which was developed over 15 years ago to support the test campaigns up to the launch. The EGSE used a specialized command language running on now obsolete workstations. ESA and the EPIC Consortium, therefore, decided to replace it with new equipment in order to fully reproduce on-ground the on-board configuration and to operate the cameras with SCOS2000, the same Mission Control System used by ESA to control the spacecraft. This was a demanding task, since it required both the recovery of the detailed knowledge of the original EGSE and the adjustment of SCOS for this special use. Recently this work has been completed by replacing the EGSE of one of the two cameras, which is now ready to be used by ESA. Here we describe the scope and purpose of this activity, the problems faced during its

  1. Time-to-digital converter based on analog time expansion for 3D time-of-flight cameras

    NASA Astrophysics Data System (ADS)

    Tanveer, Muhammad; Nissinen, Ilkka; Nissinen, Jan; Kostamovaara, Juha; Borg, Johan; Johansson, Jonny

    2014-03-01

This paper presents an architecture and achievable performance for a time-to-digital converter for 3D time-of-flight cameras. The design is partitioned into two levels. In the first level, an analog time expansion, in which the time interval to be measured is stretched by a factor k, is achieved by charging a capacitor with a current I, followed by discharging the capacitor with a current I/k. In the second level, the final time-to-digital conversion is performed by a global gated-ring-oscillator-based time-to-digital converter. The performance can be increased by exploiting its properties of intrinsic scrambling of quantization noise and mismatch error, and first-order noise shaping. The stretched time interval is measured by counting full clock cycles and storing the states of nine phases of the gated ring oscillator. The frequency of the gated ring oscillator is approximately 131 MHz, and an appropriate stretch factor k can give a resolution of ≈57 ps. The combined low nonlinearity of the time stretcher and the gated-ring-oscillator-based time-to-digital converter can achieve a distance resolution of a few centimeters with low power consumption and small area occupation. The carefully optimized circuit configuration, achieved by using an edge aligner, the time-amplification property, and the gated-ring-oscillator-based time-to-digital converter, may lead to a compact, low-power single-photon configuration for 3D time-of-flight cameras, aimed at a measurement range of 10 meters.
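The quoted figures are mutually consistent if the LSB of the converter is the gated-ring-oscillator phase step divided by the analog stretch factor. The stretch factor k = 15 below is an assumed value chosen to reproduce the ~57 ps resolution; the paper does not state it:

```python
def tdc_resolution(f_gro_hz, n_phases, stretch_k):
    """LSB of the two-level TDC: the GRO phase step (clock period
    divided by the number of sampled phases) divided by the analog
    time-stretch factor k."""
    t_phase = 1.0 / (f_gro_hz * n_phases)  # GRO phase resolution, s
    return t_phase / stretch_k

# 131 MHz GRO, 9 phases, assumed k = 15 -> ~57 ps (matches the text)
lsb = tdc_resolution(131e6, 9, 15)
```

With this LSB, the round-trip distance quantization c·LSB/2 is under a centimeter, comfortably inside the "few centimeters" distance resolution quoted above.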

  2. Camera-based measurement for transverse vibrations of moving catenaries in mine hoists using digital image processing techniques

    NASA Astrophysics Data System (ADS)

    Yao, Jiannan; Xiao, Xingming; Liu, Yao

    2016-03-01

This paper proposes a novel, non-contact, sensing method to measure the transverse vibrations of hoisting catenaries in mine hoists. Hoisting catenaries are typically moving cables, and it is not feasible to use traditional methods to measure their transverse vibrations. To obtain the transverse displacements of an arbitrary point in a moving catenary, a mask image containing a predefined reference line perpendicular to the hoisting catenaries is superposed on each frame of the processed image sequence, so that the dynamic intersection points with a grey value of 0 can be identified. Subsequently, by traversing the coordinates of the pixels with a grey value of 0 and calculating the distance of the identified dynamic points from the reference line, the transverse displacements of the selected arbitrary point in the hoisting catenary can be obtained. Furthermore, based on a theoretical model, the reasonability and applicability of the proposed camera-based method were confirmed. Additionally, a laboratory experiment was carried out, which validated the accuracy of the proposed method. The research results indicate that the proposed camera-based method is suitable for the measurement of the transverse vibrations of moving cables.
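The per-frame displacement extraction might be sketched as follows: along the reference line, find the pixels with grey value 0 (the cable), take their centroid, and convert the offset from the reference column into physical units. The centroid step and the calibration factor are illustrative assumptions, not details from the paper:

```python
def transverse_displacement(row, ref_col, mm_per_px):
    """Intersection of the catenary with the reference line: locate
    pixels with grey value 0 in the binarized image row, then convert
    the centroid's offset from the reference column to millimetres.
    Returns None if the cable does not cross this line."""
    zero_cols = [c for c, g in enumerate(row) if g == 0]
    if not zero_cols:
        return None
    # centroid of the dark pixels, in case the cable is several px wide
    c = sum(zero_cols) / len(zero_cols)
    return (c - ref_col) * mm_per_px
```

Repeating this for every frame of the sequence yields the transverse-displacement time series of the chosen point.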

  3. Low-power 20-meter 3D ranging SPAD camera based on continuous-wave indirect time-of-flight

    NASA Astrophysics Data System (ADS)

    Bellisai, S.; Ferretti, L.; Villa, F.; Ruggeri, A.; Tisa, S.; Tosi, A.; Zappa, F.

    2012-06-01

Three-dimensional (3D) image acquisition is the enabling technology for a great number of applications; cultural heritage morphology studies, industrial robotics, automotive active safety, and security access control are examples. The most important feature is a high frame rate, to detect very fast events within the acquired scenes. In order to reduce the computational complexity, Time-of-Flight algorithms for single-sensor cameras are used. To achieve a high frame rate and high distance-measurement accuracy, it is important to collect most of the reflected light using a sensor with very high sensitivity, allowing the implementation of a low-power light source. We designed and developed a single-photon-detection-based 3D ranging camera, capable of acquiring distance images up to 22.5 m, with a resolution down to one centimeter. The light source used in this prototype employs 8 laser diodes, sinusoidally modulated. The imager used in the application is based on Single-Photon Avalanche Diodes (SPADs) fabricated in a standard CMOS 0.35 μm technology. The sensor has 1024 pixels arranged in a 32x32 square layout, with overall dimensions of 3.5 mm x 3.5 mm. The camera acquires 3D images through the continuous-wave indirect Time-of-Flight (cw-iTOF) technique. The typical frame rate is 20 fps while the theoretical maximum frame rate is 5 kfps. The precision is better than 5 cm within the 22.5 m range, and the camera can be effectively used in indoor applications, e.g., in an industrial environment.
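The cw-iTOF technique recovers distance from the phase of the sinusoidal modulation, typically via four equally spaced samples per period. The demodulation below is the textbook four-sample scheme, not necessarily this camera's exact pipeline, and the modulation frequency is an assumption chosen so the unambiguous range c/(2f) matches the 22.5 m quoted above:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_distance(a0, a1, a2, a3, f_mod_hz):
    """Standard four-sample cw-iTOF demodulation: quadrature phase
    from the samples, then distance = c * phase / (4 * pi * f)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

# f ≈ c / (2 * 22.5 m) ≈ 6.66 MHz gives a 22.5 m unambiguous range
F_MOD = 6.662e6
```

A phase of pi/2 (quarter of the modulation period) then maps to a quarter of the unambiguous range, about 5.6 m.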

  4. Development and Application of Stereo Camera-Based Upper Extremity Workspace Evaluation in Patients with Neuromuscular Diseases

    PubMed Central

    Abresch, Richard T.; Nicorici, Alina; Yan, Posu; Bajcsy, Ruzena

    2012-01-01

Background The concept of reachable workspace is closely tied to upper limb joint range of motion and functional capability. Currently, no practical and cost-effective methods are available in clinical and research settings to provide arm-function evaluation using an individual’s three-dimensional (3D) reachable workspace. A method to intuitively display and effectively analyze reachable workspace would not only complement traditional upper limb functional assessments, but also provide an innovative approach to quantify and monitor upper limb function. Methodology/Principal Findings A simple stereo camera-based reachable workspace acquisition system combined with a customized 3D workspace analysis algorithm was developed and compared against a sub-millimeter motion capture system. The stereo camera-based system was robust, with minimal loss of data points, and with an average hand trajectory error of about 40 mm, which resulted in ∼5% error of the total arm distance. As a proof-of-concept, a pilot study was undertaken with healthy individuals (n = 20) and a select group of patients with various neuromuscular diseases and varying degrees of shoulder girdle weakness (n = 9). The workspace envelope surface areas generated from the 3D hand trajectory captured by the stereo camera were compared. Normalization of acquired reachable workspace surface areas to the surface area of the unit hemisphere allowed comparison between subjects. The healthy group’s relative surface areas were 0.618±0.09 and 0.552±0.092 (right and left), while the surface areas for the individuals with neuromuscular diseases ranged from 0.03 and 0.09 (the most severely affected individual) to 0.62 and 0.50 (very mildly affected individual). Neuromuscular patients with severe arm weakness demonstrated movement largely limited to the ipsilateral lower quadrant of their reachable workspace. Conclusions/Significance The findings indicate that the proposed stereo camera-based reachable
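The normalization to the unit hemisphere can be sketched as a single ratio. Scaling the hemisphere by the subject's arm length is an assumption about the paper's procedure, made so the relative area is dimensionless and size-independent:

```python
import math

def relative_surface_area(envelope_area_m2, arm_length_m):
    """Normalize a reachable-workspace envelope area by the surface
    area of a hemisphere of radius equal to the arm length (2*pi*r^2),
    so subjects of different size can be compared; a value of 1.0
    would mean the full hemisphere in front of the shoulder is reached."""
    hemisphere = 2.0 * math.pi * arm_length_m ** 2
    return envelope_area_m2 / hemisphere
```

On this scale the healthy values quoted above (~0.55-0.62) mean a bit over half the hemisphere is covered, while the most affected patients reach only a few percent of it.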

  5. Compton-edge-based energy calibration of double-sided silicon strip detectors in Compton camera

    NASA Astrophysics Data System (ADS)

    Seo, Hee; Park, Jin Hyung; Kim, Chan Hyeong; Lee, Ju Hahn; Lee, Chun Sik; Sung Lee, Jae

    2011-05-01

    Accurate energy calibration of double-sided silicon strip detectors (DSSDs) is very important, but challenging for high-energy photons. In the present study, the calibration was improved by considering the Compton edge additionally to the existing low-energy calibration points. The result, indeed, was very encouraging. The energy-calibration errors were dramatically reduced, from, on average, 15.5% and 16.9% to 0.47% and 0.31% for the 356 (133Ba) and 662 keV (137Cs) peaks, respectively. The imaging resolution of a double-scattering-type Compton camera using DSSDs as the scatterer detectors, for a 22Na point-like source, also was improved, by ˜9%.
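The Compton-edge calibration points used above follow directly from Compton kinematics: the maximum energy transferred to an electron is E_ce = 2E² / (m_e c² + 2E). A small check of the two sources mentioned in the abstract:

```python
def compton_edge_kev(e_gamma_kev):
    """Compton edge: maximum electron recoil energy for an incident
    photon of energy E, with m_e c^2 = 511 keV."""
    return 2.0 * e_gamma_kev ** 2 / (511.0 + 2.0 * e_gamma_kev)

edge_ba133 = compton_edge_kev(356.0)  # ~207 keV for the 356 keV line
edge_cs137 = compton_edge_kev(662.0)  # ~477 keV for the 662 keV line
```

These edges extend the calibration well beyond the low-energy photopeaks that the DSSD can fully absorb, which is why including them reduces the calibration error so sharply.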

  6. An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera

    NASA Astrophysics Data System (ADS)

    Kumar, K. S. Chidanand; Bhowmick, Brojeshwar

A driver drowsiness identification system has been proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. The pupils are then located in the face region. In subsequent frames of video, the pupils are tracked in order to determine whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
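The consecutive-frame decision rule can be sketched as a simple run-length counter over the per-frame eye state. The frame threshold is an assumed value; the paper does not state one:

```python
def drowsiness_alarm(eye_open_per_frame, threshold=10):
    """Return True once the eyes have been closed for `threshold`
    consecutive frames; any open-eye frame resets the counter."""
    closed_run = 0
    for is_open in eye_open_per_frame:
        closed_run = 0 if is_open else closed_run + 1
        if closed_run >= threshold:
            return True
    return False
```

A normal blink of a few frames never trips the alarm, while a sustained closure does, which is the distinction the tracking stage needs to support.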

  7. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross

Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating a breakdown in tissue structure brought on by precancerous or cancerous conditions. With these data the patient can seek further testing and diagnosis as needed. Proliferation of this device will give communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  8. Camera-Based Online Signature Verification with Sequential Marginal Likelihood Change Detector

    NASA Astrophysics Data System (ADS)

    Muramatsu, Daigo; Yasuda, Kumiko; Shirato, Satoshi; Matsumoto, Takashi

    Several online signature verification systems that use cameras have been proposed. These systems obtain online signature data from video images by tracking the pen tip. Such systems are very useful because special devices such as pen-operated digital tablets are not necessary. One drawback, however, is that if the captured images are blurred, pen tip tracking may fail, which causes performance degradation. To solve this problem, here we propose a scheme to detect such images and re-estimate the pen tip position associated with the blurred images. Our pen tracking algorithm is implemented by using the sequential Monte Carlo method, and a sequential marginal likelihood is used for blurred image detection. Preliminary experiments were performed using private data consisting of 390 genuine signatures and 1560 forged signatures. The experimental results show that the proposed algorithm improved performance in terms of verification accuracy.

  9. Performance of the Tachyon Time-of-Flight PET Camera

    PubMed Central

    Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-01

We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates, and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that, at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057
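The benefit of the 314 ps timing resolution can be related to localization along the line of response via the standard back-of-the-envelope estimate dx = c·Δt/2 (this calculation is an editorial illustration, not from the paper):

```python
C_MM_PER_PS = 0.2998  # speed of light, mm per picosecond

def tof_localization_fwhm_mm(timing_fwhm_ps):
    """Spatial FWHM along the line of response implied by a
    coincidence timing resolution: dx = c * dt / 2."""
    return C_MM_PER_PS * timing_fwhm_ps / 2.0

dx_tachyon = tof_localization_fwhm_mm(314.0)     # ~47 mm
dx_commercial = tof_localization_fwhm_mm(550.0)  # ~82 mm
```

Halving the event's positional uncertainty along the line of response is what drives the factor-of-~2.3 reduction in contrast noise reported above.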

  10. Performance of the Tachyon Time-of-Flight PET Camera

DOE PAGES Beta

    Peng, Q.; Choong, W. -S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-23

We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates, and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. We find that, at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  11. Performance of the Tachyon Time-of-Flight PET Camera

    SciTech Connect

    Peng, Q.; Choong, W. -S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-23

We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates, and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. We find that, at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  12. A Peptide-Based Positron Emission Tomography Probe for In Vivo Detection of Caspase Activity in Apoptotic Cells

    PubMed Central

    Hight, Matthew R.; Cheung, Yiu-Yin; Nickels, Michael L.; Dawson, Eric S.; Zhao, Ping; Saleh, Samir; Buck, Jason R.; Tang, Dewei; Washington, M. Kay; Coffey, Robert J.; Manning, H. Charles

    2014-01-01

Purpose Apoptosis, or programmed cell death, can be leveraged as a surrogate measure of response to therapeutic interventions in medicine. Cysteine aspartic acid-specific proteases, or caspases, are essential determinants of apoptosis signaling cascades and represent promising targets for molecular imaging. Here, we report development and in vivo validation of [18F]4-fluorobenzylcarbonyl-Val-Ala-Asp(OMe)-fluoromethylketone ([18F]FB-VAD-FMK), a novel peptide-based molecular probe suitable for quantification of caspase activity in vivo using positron emission tomography (PET). Experimental Design Supported by molecular modeling studies and subsequent in vitro assays suggesting probe feasibility, the labeled pan-caspase inhibitory peptide, [18F]FB-VAD-FMK, was produced in high radiochemical yield and purity using a simple two-step radiofluorination. The biodistribution of [18F]FB-VAD-FMK in normal tissue and its efficacy in predicting response to molecularly targeted therapy in tumors were evaluated using microPET imaging of mouse models of human colorectal cancer (CRC). Results Accumulation of [18F]FB-VAD-FMK was found to agree with elevated caspase-3 activity in response to Aurora B kinase inhibition as well as a multi-drug regimen that combined an inhibitor of mutant BRAF and a dual PI3K/mTOR inhibitor in V600EBRAF colon cancer. In the latter setting, [18F]FB-VAD-FMK PET was also elevated in the tumors of cohorts that exhibited reduction in size. Conclusions These studies illuminate [18F]FB-VAD-FMK as a promising PET imaging probe to detect apoptosis in tumors and as a novel, potentially translatable biomarker for predicting response to personalized medicine. PMID:24573549

  13. Beyond leaf color: Comparing camera-based phenological metrics with leaf biochemical, biophysical, and spectral properties throughout the growing season of a temperate deciduous forest

    NASA Astrophysics Data System (ADS)

    Yang, Xi; Tang, Jianwu; Mustard, John F.

    2014-03-01

    Plant phenology, a sensitive indicator of climate change, influences vegetation-atmosphere interactions by changing the carbon and water cycles from local to global scales. Camera-based phenological observations of the color changes of the vegetation canopy throughout the growing season have become popular in recent years. However, the linkages between camera phenological metrics and leaf biochemical, biophysical, and spectral properties remain elusive. We measured key leaf properties, including chlorophyll concentration and leaf reflectance, on a weekly basis from June to November 2011 in a white oak forest on the island of Martha's Vineyard, Massachusetts, USA. Concurrently, we used a digital camera to automatically acquire daily pictures of the tree canopies. We found a mismatch between the camera-based phenological metric for canopy greenness (green chromatic coordinate, gcc) and the total chlorophyll and carotenoid concentrations and leaf mass per area during late spring/early summer. The seasonal peak of gcc is approximately 20 days earlier than the peak of the total chlorophyll concentration. During the fall, both canopy and leaf redness were significantly correlated with the vegetation index for anthocyanin concentration, opening a new window to quantify vegetation senescence remotely. Satellite- and camera-based vegetation indices agreed well, suggesting that camera-based observations can be used as ground validation for satellites. Using this high-temporal-resolution dataset of leaf biochemical, biophysical, and spectral properties, our results show the strengths and potential uncertainties of using canopy color as a proxy for ecosystem functioning.
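
    The green chromatic coordinate above is computed directly from the camera's RGB digital numbers, averaged over a canopy region of interest. A minimal sketch, following the standard phenocam definition gcc = G / (R + G + B) (array names and ROI handling are illustrative, not the authors' code):

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """gcc = G / (R + G + B), averaged over the region of interest.

    rgb: float array of shape (H, W, 3) holding the camera's R, G, B
    digital numbers for the canopy ROI.
    """
    g = rgb[..., 1]
    total = rgb.sum(axis=-1)
    # Guard against all-zero (black) pixels before dividing.
    gcc = np.divide(g, total, out=np.zeros_like(g), where=total > 0)
    return float(gcc.mean())

# Equal R, G, B gives gcc = 1/3; greener canopies push gcc toward 1.
roi = np.full((4, 4, 3), 85.0)
print(green_chromatic_coordinate(roi))   # 0.333...
```

    Tracking this value through the season yields the greenness curve whose peak the abstract compares against leaf chlorophyll.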

  14. Preliminary considerations of an intense slow positron facility based on a {sup 78}Kr loop in the High Flux Isotope Reactor

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.; Peretz, F.J.; Montgomery, B.H.; Hayter, J.B.

    1990-01-01

    Suggestions have been made to the National Steering Committee for the Advanced Neutron Source (ANS) by Mills that provisions be made to install a high intensity slow positron facility, based on a {sup 78}Kr loop, that would be available to the general community of scientists interested in this field. The flux of thermal neutrons calculated for the ANS is E + 15 sec{sup {minus}1} m{sup {minus}2}, which Mills has estimated will produce a 5 mm beam of slow positrons having a current of about 1 E + 12 sec{sup {minus}1}. The intensity of such a beam will be at least 3 orders of magnitude greater than those presently available. The construction of the ANS is not anticipated to be complete until the year 2000. In order to properly plan the design of the ANS, strong consideration is being given to a proof-of-principle experiment, using the presently available High Flux Isotope Reactor, to test the {sup 78}Kr loop technique. The positron current from the HFIR facility is expected to be about 1 E + 10 sec{sup {minus}1}, which is 2 orders of magnitude greater than any other available. If the experiment succeeds, a very valuable facility will be established, and important information will be generated on how the ANS should be designed. 3 refs., 1 fig.

  15. Positron-alkali atom scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.; Ward, S. J.

    1990-01-01

    Positron-alkali atom scattering was recently investigated both theoretically and experimentally in the energy range from a few eV up to 100 eV. On the theoretical side, calculations of the integrated elastic and excitation cross sections, as well as total cross sections, for Li, Na and K were based upon either the close-coupling method or the modified Glauber approximation. These theoretical results are in good agreement with experimental measurements of the total cross section for both Na and K. Resonance structures were also found in the L = 0, 1 and 2 partial waves for positron scattering from the alkalis. The structure of these resonances appears to be quite complex and, as expected, they occur in conjunction with the atomic excitation thresholds. Currently, both theoretical and experimental work is in progress on positron-Rb scattering in the same energy range.

  16. Instrumentation in positron emission tomography

    SciTech Connect

    Not Available

    1988-03-11

    Positron emission tomography (PET) is a three-dimensional medical imaging technique that noninvasively measures the concentration of radiopharmaceuticals in the body that are labeled with positron emitters. With the proper compounds, PET can be used to measure metabolism, blood flow, or other physiological values in vivo. The technique is based on the physics of positron annihilation and detection and the mathematical formulations developed for x-ray computed tomography. Modern PET systems can provide three-dimensional images of the brain, the heart, and other internal organs with resolutions on the order of 4 to 6 mm. With the selectivity provided by a choice of injected compounds, PET has the power to provide unique diagnostic information that is not available with any other imaging modality. This is the first of five reports on the nature and uses of PET that have been prepared for the American Medical Association's Council on Scientific Affairs by an authoritative panel.
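
    The annihilation physics sketched above (back-to-back 511 keV photons) is what lets PET assign each coincidence to a line of response (LOR) and accumulate them into a sinogram. A toy illustration with heavily simplified geometry (all names and binning choices are illustrative):

```python
import math, random

def lor_coordinates(x, y, theta):
    """Sinogram coordinates of the line of response through (x, y).

    theta is the LOR's normal angle; s = x*cos(theta) + y*sin(theta)
    is the signed distance of the line from the scanner center.
    """
    return theta, x * math.cos(theta) + y * math.sin(theta)

def sinogram(events, n_theta=8, n_s=16, fov=2.0):
    """Histogram annihilation events into a (theta, s) sinogram."""
    histo = [[0] * n_s for _ in range(n_theta)]
    for x, y in events:
        theta = random.uniform(0.0, math.pi)          # isotropic emission
        _, s = lor_coordinates(x, y, theta)
        i = min(n_theta - 1, int(theta / math.pi * n_theta))
        j = int((s + fov) / (2 * fov) * n_s)          # bin s in [-fov, fov)
        if 0 <= j < n_s:
            histo[i][j] += 1
    return histo

random.seed(0)
sg = sinogram([(0.5, 0.0)] * 1000)    # point source at x = 0.5
print(sum(map(sum, sg)))              # 1000: every LOR falls in the FOV
```

    A point source traces the sinusoid s(theta) = x*cos(theta) + y*sin(theta) across the sinogram, which is the starting point for the tomographic reconstruction the abstract mentions.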

  17. Positron spectroscopy for materials characterization

    SciTech Connect

    Schultz, P.J.; Snead, C.L. Jr.

    1988-01-01

    One of the more active areas of research on materials involves the observation and characterization of defects. The discovery of positron localization in vacancy-type defects in solids in the 1960s initiated a vast number of experimental and theoretical investigations which continue to this day. Traditional positron annihilation spectroscopic techniques, including lifetime studies, angular correlation, and Doppler broadening of annihilation radiation, are still being applied to new problems in the bulk properties of simple metals and their alloys. In addition, new techniques based on tunable sources of monoenergetic positron beams have, in the last 5 years, expanded the horizons to studies of surfaces, thin films, and interfaces. In the present paper we briefly review these experimental techniques, illustrating with some of the important accomplishments of the field. 40 refs., 19 figs.

  18. Phantom experiments on a PSAPD-based compact gamma camera with submillimeter spatial resolution for small animal SPECT

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2010-01-01

    We demonstrate a position-sensitive avalanche photodiode (PSAPD) based compact gamma camera for the application of small animal single photon emission computed tomography (SPECT). The silicon PSAPD with a two-dimensional resistive layer and four readout channels is implemented as a gamma ray detector to record the energy and position of radiation events from a radionuclide source. A 2 mm thick monolithic CsI:Tl scintillator is optically coupled to a PSAPD with an 8 mm × 8 mm active area, providing submillimeter intrinsic spatial resolution, high energy resolution (16% full-width half maximum at 140 keV) and high gain. A mouse heart phantom filled with an aqueous solution of 370 MBq 99mTc-pertechnetate (140 keV) was imaged using the PSAPD detector module and a tungsten knife-edge pinhole collimator with a 0.5 mm diameter aperture. The PSAPD detector module was cooled with cold nitrogen gas to suppress dark current shot noise. For each projection image of the mouse heart phantom, a rotated diagonal readout algorithm was used to calculate the position of radiation events and correct for pincushion distortion. The reconstructed image of the mouse heart phantom demonstrated reproducible image quality with submillimeter spatial resolution (0.7 mm), showing the feasibility of using the compact PSAPD-based gamma camera for a small animal SPECT system. PMID:21278833
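
    In a four-channel resistive readout like the PSAPD described, the event position is recovered from how the charge splits among the corner electrodes. The following is the standard four-corner (Anger-type) estimate as a simplified stand-in for the paper's rotated diagonal readout algorithm; the corner layout is an assumption:

```python
def corner_position(q_a, q_b, q_c, q_d):
    """Estimate normalized (x, y) in [-1, 1] from four corner charges.

    Assumed corner layout: A = top-left, B = top-right,
    C = bottom-right, D = bottom-left. The summed charge is
    proportional to the deposited energy.
    """
    total = q_a + q_b + q_c + q_d
    x = ((q_b + q_c) - (q_a + q_d)) / total   # right minus left
    y = ((q_a + q_b) - (q_c + q_d)) / total   # top minus bottom
    return x, y, total

# An event splitting charge equally lies at the detector center.
x, y, energy = corner_position(1.0, 1.0, 1.0, 1.0)
print(x, y, energy)   # 0.0 0.0 4.0
```

    The pincushion distortion the abstract corrects for arises because this charge-ratio mapping compresses positions near the detector edges.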

  19. Quantitative Fluorescence Assays Using a Self-Powered Paper-Based Microfluidic Device and a Camera-Equipped Cellular Phone.

    PubMed

    Thom, Nicole K; Lewis, Gregory G; Yeung, Kimy; Phillips, Scott T

    2014-01-01

    Fluorescence assays often require specialized equipment and, therefore, are not easily implemented in resource-limited environments. Herein we describe a point-of-care assay strategy in which fluorescence in the visible region is used as a readout, while a camera-equipped cellular phone is used to capture the fluorescent response and quantify the assay. The fluorescence assay is made possible using a paper-based microfluidic device that contains an internal fluidic battery, a surface-mount LED, a 2-mm section of a clear straw as a cuvette, and an appropriately-designed small molecule reagent that transforms from weakly fluorescent to highly fluorescent when exposed to a specific enzyme biomarker. The resulting visible fluorescence is digitized by photographing the assay region using a camera-equipped cellular phone. The digital images are then quantified using image processing software to provide sensitive as well as quantitative results. In a model 30 min assay, the enzyme β-D-galactosidase was measured quantitatively down to 700 pM levels. This Communication describes the design of these types of assays in paper-based microfluidic devices and characterizes the key parameters that affect the sensitivity and reproducibility of the technique. PMID:24490035
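
    Quantifying the assay from a phone photograph reduces to averaging pixel intensity over the assay region and inverting a calibration curve. A hedged sketch: the green-channel averaging and the linear calibration below are generic image-analysis steps, not the authors' exact software:

```python
def mean_intensity(image, region):
    """Mean green-channel intensity over a rectangular assay region.

    image: list of rows of (r, g, b) tuples; region: (x0, y0, x1, y1).
    """
    x0, y0, x1, y1 = region
    pixels = [image[y][x][1] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(pixels) / len(pixels)

def concentration(intensity, blank, slope):
    """Invert an assumed linear calibration I = blank + slope * C."""
    return (intensity - blank) / slope

img = [[(0, 120, 0)] * 4 for _ in range(4)]       # uniform mock photo
i = mean_intensity(img, (0, 0, 4, 4))
print(concentration(i, blank=20.0, slope=0.5))    # (120 - 20) / 0.5 = 200.0
```

    In practice the calibration would be fit from standards imaged under the same LED excitation, since phone cameras differ in sensitivity and white balance.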

  20. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    NASA Astrophysics Data System (ADS)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the scale of the consumer camera market, steep price reductions for fixed-lens cameras have eroded profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e., digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. Designs that suffer little performance degradation from manufacturing errors of all types are preferred, for good balance among size, optical performance, and production yield. The decentering sensitivity of moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. Development of high-ratio zoom lenses with vibration reduction mechanisms is therefore confronted by the challenge of reduced performance due to decentering, making control of decentering sensitivity between lens groups paramount. While there are a number of ways to align lenses (axial alignment), shock resistance and robustness to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve low decentering sensitivity at the same time. A 4-group zoom construction is beneficial for making lenses smaller, but its decentering sensitivity is greater; a 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this approach.

  1. Evaluation of a dual-panel PET camera design for breast cancer imaging.

    PubMed

    Zhang, Jin; Chinn, Gary; Foudray, Angela M K; Habte, Frezghi; Olcott, Peter; Levin, Craig S

    2006-01-01

    We are developing a novel, portable dual-panel positron emission tomography (PET) camera dedicated to breast cancer imaging. With a sensitive area of approximately 150 cm(2), this camera is based on arrays of lutetium oxyorthosilicate (LSO) crystals (1x1x3 mm(3)) coupled to 11x11-mm(2) position-sensitive avalanche photodiodes (PSAPD). GATE open source software was used to perform Monte Carlo simulations to optimize the parameters for the camera design. The noise equivalent counting (NEC) rate, together with the true, scatter, and random counting rates were simulated at different time and energy windows. Focal plane tomography (FPT) was used for visualizing the tumors at different depths between the two detector panels. Attenuation and uniformity corrections were applied to images. PMID:17646005
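
    The noise equivalent counting (NEC) rate mentioned above combines the true (T), scatter (S), and random (R) coincidence rates. A minimal sketch using the common definition NEC = T^2 / (T + S + k*R); the value of k is a convention that varies with how randoms are estimated, and the paper may use a variant:

```python
def nec_rate(trues, scatters, randoms, k=2.0):
    """Noise-equivalent count rate, NEC = T^2 / (T + S + k*R).

    k = 2 assumes delayed-window randoms subtraction; k = 1 applies
    when the randoms estimate is effectively noiseless (an assumption;
    conventions differ between authors).
    """
    return trues ** 2 / (trues + scatters + k * randoms)

# Example: 50 kcps trues, 20 kcps scatter, 15 kcps randoms.
print(nec_rate(50e3, 20e3, 15e3))   # 2.5e9 / 1.0e5 = 25000.0 cps
```

    Scanning this quantity over candidate time and energy windows, as the abstract describes, picks the operating point that maximizes effective image statistics.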

  2. Target Volume Delineation in Dynamic Positron Emission Tomography Based on Time Activity Curve Differences

    NASA Astrophysics Data System (ADS)

    Teymurazyan, Artur

    Tumor volume delineation plays a critical role in radiation treatment planning and simulation, since inaccurately defined treatment volumes may lead to the overdosing of normal surrounding structures and potentially missing the cancerous tissue. However, the imaging modality almost exclusively used to determine tumor volumes, X-ray Computed Tomography (CT), does not readily exhibit a distinction between cancerous and normal tissue. It has been shown that CT data augmented with PET can improve radiation treatment plans by providing functional information not available otherwise. Presently, static PET scans account for the majority of procedures performed in clinical practice. In the radiation therapy (RT) setting, these scans are visually inspected by a radiation oncologist for the purpose of tumor volume delineation. This approach, however, often results in significant interobserver variability when comparing contours drawn by different experts on the same PET/CT data sets. For this reason, a search for more objective contouring approaches is underway. The major drawback of conventional tumor delineation in static PET images is the fact that two neighboring voxels of the same intensity can exhibit markedly different overall dynamics. Therefore, equal intensity voxels in a static analysis of a PET image may be falsely classified as belonging to the same tissue. Dynamic PET allows the evaluation of image data in the temporal domain, which often describes specific biochemical properties of the imaged tissues. Analysis of dynamic PET data can be used to improve classification of the imaged volume into cancerous and normal tissue. In this thesis we present a novel tumor volume delineation approach (Single Seed Region Growing algorithm in 4D (dynamic) PET or SSRG/4D-PET) in dynamic PET based on TAC (Time Activity Curve) differences. A partially-supervised approach is pursued in order to allow an expert reader to utilize the information available from other imaging
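
    The TAC-difference idea behind SSRG/4D-PET can be sketched as seeded region growing in which a neighboring voxel joins the region only if its time-activity curve is similar enough to the seed's. This toy version uses a sum-of-squared-differences criterion on a small 2D grid; the thesis's actual similarity measure, threshold selection, and 4D handling may differ:

```python
def grow_region(tacs, seed, tol):
    """Seeded region growing on a grid of time-activity curves (TACs).

    tacs: dict mapping (x, y) -> list of activity values over time.
    A 4-connected neighbor is added when the sum of squared differences
    between its TAC and the seed's TAC is below tol.
    """
    seed_tac = tacs[seed]
    region, frontier = {seed}, [seed]
    while frontier:
        x, y = frontier.pop()
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in tacs and nb not in region:
                ssd = sum((a - b) ** 2 for a, b in zip(tacs[nb], seed_tac))
                if ssd < tol:
                    region.add(nb)
                    frontier.append(nb)
    return region

# Two voxel populations with the same intensities but opposite dynamics,
# mirroring the abstract's point that equal-intensity voxels can differ.
rising = [1.0, 3.0, 5.0]       # tracer accumulating
washout = [5.0, 3.0, 1.0]      # same values, reversed in time
tacs = {(x, y): (rising if x < 2 else washout)
        for x in range(4) for y in range(2)}
print(sorted(grow_region(tacs, (0, 0), tol=1.0)))
```

    A static-intensity criterion could not separate these two populations at the middle time frame, which is exactly the failure mode the dynamic approach addresses.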

  3. Imaging performance comparison between a LaBr{sub 3}:Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera

    SciTech Connect

    Russo, P.; Mettivier, G.; Pani, R.; Pellegrini, R.; Cinti, M. N.; Bennati, P.

    2009-04-15

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr{sub 3}:Ce scintillator continuous crystal (49x49x5 mm{sup 3}) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14x14x1 mm{sup 3}) with 256x256 square pixels and a pitch of 55 {mu}m, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 {mu}m, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.

  4. A smart camera based traffic enforcement system: experiences from the field

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2013-03-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Embedded vision systems can count vehicles and estimate the state of traffic along the road; they can supplement or replace loop sensors, with their limited local scope, and radar, which measures the speed, presence, and number of vehicles. This work presents a vision system built to detect and report traffic rule violations at unsecured railway crossings, which pose a threat to drivers day and night. Our system is designed to detect and record vehicles passing over the railway crossing after the red light has been activated. Sparse optical flow in conjunction with motion clustering is used for real-time motion detection in order to capture these safety-critical events. The cameras are activated by an electrical signal from the railway when the red light turns on. If they detect a vehicle moving over the stopping line, and it is well over this limit, an image sequence is recorded and stored onboard for later evaluation. The system has been designed to be operational in all weather conditions, delivering human-readable license plate images even under the worst illumination conditions, such as direct incident sunlight or a direct view into vehicle headlights. After several months of operation in the field we report on the performance of the system, its hardware implementation, and the implementation of algorithms that have proven usable in this real-world application.
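
    Sparse optical flow of the Lucas-Kanade type, as used by the system above, reduces to a small least-squares problem per tracked window built from image gradients. A self-contained numpy sketch on a synthetic translated image; the deployed system's implementation and its motion-clustering stage are not shown:

```python
import numpy as np

def lucas_kanade_window(frame1, frame2):
    """Estimate the (u, v) translation of one image window.

    Solves the Lucas-Kanade normal equations
      [sum Ix^2   sum IxIy] [u]   [sum Ix*It]
      [sum IxIy   sum Iy^2] [v] = -[sum Iy*It]
    built from spatial gradients (Ix, Iy) and the temporal difference It.
    """
    iy, ix = np.gradient(frame1)          # gradients along rows, columns
    it = frame2 - frame1                  # temporal derivative
    a = np.array([[(ix * ix).sum(), (ix * iy).sum()],
                  [(ix * iy).sum(), (iy * iy).sum()]])
    b = -np.array([(ix * it).sum(), (iy * it).sum()])
    return np.linalg.solve(a, b)

# Synthetic bilinear image I(x, y) = 0.01*x*y shifted by (u, v) = (0.5, 0.3).
x, y = np.meshgrid(np.arange(1.0, 11.0), np.arange(1.0, 11.0))
f1 = 0.01 * x * y
f2 = 0.01 * (x - 0.5) * (y - 0.3)
u, v = lucas_kanade_window(f1, f2)
print(u, v)   # close to (0.5, 0.3)
```

    Flow vectors estimated over many such windows are then clustered; a coherent cluster crossing the stopping line after red-light activation is the trigger event.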

  5. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and the addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and the 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger pointing gesture and the tactile distance feedback is fully identifiable to the blind. PMID:24932864

  6. Lightweight camera head for robotic-based binocular stereo vision: an integrated engineering approach

    NASA Astrophysics Data System (ADS)

    Pretlove, John R. G.; Parker, Graham A.

    1992-03-01

    This paper presents the design and development of a real-time eye-in-hand stereo-vision system to aid robot guidance in a manufacturing environment. The stereo vision head comprises a novel camera arrangement with servo-vergence, focus, and aperture that continuously provides high-quality images to a dedicated image processing system and parallel processing array. The stereo head has four degrees of freedom but it relies on the robot end- effector for all remaining movement. This provides the robot with exploratory sensing abilities allowing it to undertake a wider variety of less constrained tasks. Unlike other stereo vision research heads, the overriding factor in the Surrey head has been a truly integrated engineering approach in an attempt to solve an extremely complex problem. The head is low cost, low weight, employs state-of-the-art motor technology, is highly controllable and occupies a small size envelope. Its intended applications include high-accuracy metrology, 3-D path following, object recognition and tracking, parts manipulation, and component inspection for the manufacturing industry.

  7. A Trajectory and Orientation Reconstruction Method for Moving Objects Based on a Moving Monocular Camera

    PubMed Central

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-01-01

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is not only to achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which could not be obtained by PnP methods due to a lack of features. It is a breakthrough improvement that develops intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained using only linear equations, without any initial value or iteration; the orientation of an object under poor conditions can also be calculated. The condition required for the existence of a definite solution is derived from equivalence relations among the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable. PMID:25760053
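
    The trajectory-intersection idea can be illustrated for the simplest case of a point moving on a straight line, p(t) = p0 + v*t: each sighting from a moving camera contributes the linear constraint cross(d(t), p(t) - c(t)) = 0, and stacking sightings gives a linear least-squares problem for (p0, v) with no initial value or iteration. The parametrization and notation below are mine, not the paper's; note that the camera must move nonlinearly (accelerate), since otherwise adding any multiple of (p(t) - c(t)) to the trajectory is an unresolvable ambiguity:

```python
import numpy as np

def skew(d):
    """Matrix form of the cross product: skew(d) @ w == np.cross(d, w)."""
    return np.array([[0.0, -d[2], d[1]],
                     [d[2], 0.0, -d[0]],
                     [-d[1], d[0], 0.0]])

def reconstruct_line(times, cams, dirs):
    """Recover (p0, v) of a linear trajectory p(t) = p0 + v*t.

    Each bearing d(t) from camera position c(t) gives
    cross(d, p0) + t*cross(d, v) = cross(d, c), i.e. the block row
    [skew(d) | t*skew(d)] with right-hand side skew(d) @ c.
    """
    rows, rhs = [], []
    for t, c, d in zip(times, cams, dirs):
        s = skew(d)
        rows.append(np.hstack([s, t * s]))
        rhs.append(s @ c)
    sol, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return sol[:3], sol[3:]

# Synthetic truth, observed from an accelerating camera path.
p0, v = np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.5, 0.0])
times = np.arange(5.0)
cams = [np.array([0.5 * t * t, -2.0, 0.0]) for t in times]
dirs = [(p0 + v * t - c) / np.linalg.norm(p0 + v * t - c)
        for t, c in zip(times, cams)]
p0_hat, v_hat = reconstruct_line(times, cams, dirs)
print(np.round(p0_hat, 6), np.round(v_hat, 6))   # recovers p0 and v
```

    Higher-order trajectories (e.g. conics) add more unknown coefficients but keep the system linear, which is the method's appeal.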

  8. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on cotton textile production: they degrade product quality and thereby affect the economic benefit and market competitiveness of the producer. Detecting and eliminating heterogeneous fibre is therefore particularly important for improving cotton processing, raising the quality of cotton textiles, and reducing production cost, and the technology has favorable market value and development prospects. Optical detection systems are widely used for this task. In our system, a linear CCD camera scans the running cotton; the video signals are fed into a computer and processed according to grayscale differences, and if heterogeneous fibre is present the computer sends a command to drive a gas nozzle that eliminates it. In this paper we adopt a monochrome LED array as the new detection light source; its flicker, luminous intensity stability, lumen depreciation, and useful life are all superior to fluorescent lamps. We first analyze the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate emission band for the light source, finally adopting a violet LED array. The overall hardware structure and software design are also introduced in this paper.
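
    The grayscale-difference processing described above amounts to flagging line-scan pixels that deviate from the bright cotton background by more than a threshold. A toy sketch; the background level, threshold, and signal values are illustrative, not the paper's calibration:

```python
def detect_foreign_fibre(scan_line, background=220, threshold=40):
    """Return indices of suspicious pixels in one linear-CCD scan.

    scan_line: grayscale values (0-255) across the cotton web.
    A pixel more than `threshold` darker than the expected cotton
    background is flagged as possible heterogeneous fibre.
    """
    return [i for i, value in enumerate(scan_line)
            if background - value > threshold]

line = [225, 222, 120, 118, 224, 226, 90, 223]   # two dark intrusions
hits = detect_foreign_fibre(line)
print(hits)   # [2, 3, 6]
```

    In the real system a nonempty hit list at a given position would trigger the gas nozzle to blow the fibre out of the stream.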

  9. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens).

    PubMed

    Robert, Charlotte; Montémont, Guillaume; Rebuffel, Véronique; Buvat, Irène; Guérin, Lucie; Verger, Loïck

    2010-05-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain in efficiency, a high aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on a fine sampling of the CZT detector and on depth of interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss of spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging. PMID:20400808

  10. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future. PMID:20210459
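
    The user-defined linear sequential-loop filtering described above is essentially function composition over each frame. A minimal sketch; the filter names and frame representation are illustrative, not microAVS(2) internals:

```python
def apply_pipeline(frame, filters):
    """Apply image-processing filters in the user-defined order."""
    for f in filters:
        frame = f(frame)
    return frame

def invert(frame):
    """Photographic negative of an 8-bit grayscale frame."""
    return [[255 - p for p in row] for row in frame]

def threshold(frame, level=128):
    """Binarize: bright regions to 255, dark regions to 0."""
    return [[255 if p >= level else 0 for p in row] for row in frame]

# The same filters can be engaged repeatedly, in any order the user picks.
frame = [[10, 200], [130, 60]]
out = apply_pipeline(frame, [invert, threshold, invert])
print(out)   # [[0, 255], [255, 0]]
```

    Because each filter consumes the previous filter's output in a single loop, only one frame buffer is live at a time, which is consistent with the low memory footprint the abstract claims.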

  11. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.

  12. Simulation-based evaluation and optimization of a new CdZnTe gamma-camera architecture (HiSens)

    NASA Astrophysics Data System (ADS)

    Robert, Charlotte; Montémont, Guillaume; Rebuffel, Véronique; Buvat, Irène; Guérin, Lucie; Verger, Loïck

    2010-05-01

    A new gamma-camera architecture named HiSens is presented and evaluated. It consists of a parallel hole collimator, a pixelated CdZnTe (CZT) detector associated with specific electronics for 3D localization, and dedicated reconstruction algorithms. To gain in efficiency, a high aperture collimator is used. The spatial resolution is preserved thanks to accurate 3D localization of the interactions inside the detector, based on a fine sampling of the CZT detector and on depth of interaction information. The performance of this architecture is characterized using Monte Carlo simulations in both planar and tomographic modes. Detective quantum efficiency (DQE) computations are then used to optimize the collimator aperture. In planar mode, the simulations show that the fine CZT detector pixelization increases the system sensitivity by a factor of 2 compared to a standard Anger camera, without loss of spatial resolution. These results are then validated against experimental data. In SPECT, Monte Carlo simulations confirm the merits of the HiSens architecture observed in planar imaging.

  13. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    NASA Astrophysics Data System (ADS)

    Tohme, Michel S.; Qi, Jinyi

    2009-06-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 × 3 line phantom, an ultra-micro resolution phantom and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP
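
    The factored system model described above, y = B(Gx) with G a geometric projection matrix and B a sinogram blurring matrix, plugs directly into iterative reconstruction. A tiny numpy MLEM sketch using that factorization; the matrices here are made-up toys, not a measured blur or a real scanner geometry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy factored system model: y = B @ (G @ x).
G = rng.uniform(0.1, 1.0, size=(6, 4))    # geometric projection: 6 bins, 4 voxels
B = np.eye(6) + 0.2 * np.eye(6, k=1) + 0.2 * np.eye(6, k=-1)  # sinogram blur
A = B @ G                                  # full system matrix

x_true = np.array([1.0, 4.0, 2.0, 3.0])
y = A @ x_true                             # noiseless toy measurement

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x)); the blur enters the
# forward projection through the factorization B @ (G @ x).
x = np.ones(4)
sens = A.T @ np.ones(6)
for _ in range(5000):
    x *= (A.T @ (y / (B @ (G @ x)))) / sens

print(np.round(x, 2))   # approaches x_true on this noiseless toy
```

    Replacing a simulated B with one estimated from uncollimated point source measurements, as the paper proposes, changes only the blurring matrix, not the reconstruction loop.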

  14. Iterative Image Reconstruction for Positron Emission Tomography Based on Detector Response Function Estimated from Point Source Measurements

    PubMed Central

    Tohme, Michel S.; Qi, Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can be easily applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2-D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3-by-3 line phantom, an ultra-micro resolution phantom, and a 22Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP
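    The factored system model described in this abstract can be illustrated with a toy numerical sketch. All dimensions and matrices below are invented for illustration (a tridiagonal blur stands in for the measured detector blurring kernels); this is not the microPET II model:

```python
import numpy as np

# Toy dimensions: n_vox image voxels, n_bins sinogram bins (assumed values).
n_vox, n_bins = 16, 24
rng = np.random.default_rng(0)

# G: geometric projection matrix (analytically computed in practice;
# random nonnegative entries here purely for illustration).
G = rng.random((n_bins, n_vox))

# B: sinogram blurring matrix -- a simple tridiagonal blur standing in
# for the kernels estimated from point source measurements.
B = np.zeros((n_bins, n_bins))
for i in range(n_bins):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n_bins:
            B[i, j] = 0.5 if i == j else 0.25

x = rng.random(n_vox)      # activity image
y_bar = B @ (G @ x)        # factored forward model: E[y] = B G x
```

The factorization pays off because G is applied once per iteration while B is a small, sparse sinogram-domain operator.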

  15. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass properties. In this paper, gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem with unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748
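    Real-time least-squares identification of unknown parameters, as used here for the mass properties, can be sketched with a textbook recursive least-squares (RLS) update. This is a generic form, not the authors' exact estimator; the scalar "mass" parameter and signals are invented:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update.

    theta: (n,) parameter estimate; P: (n, n) covariance;
    phi: (n,) regressor; y: scalar measurement; lam: forgetting factor.
    """
    Pphi = P @ phi
    k = Pphi / (lam + phi @ Pphi)      # gain vector
    err = y - phi @ theta              # prediction error
    theta = theta + k * err
    P = (P - np.outer(k, Pphi)) / lam
    return theta, P

# Identify a hypothetical unknown mass parameter a in y = a * phi online.
rng = np.random.default_rng(1)
true_a = 2.5
theta, P = np.zeros(1), 100.0 * np.eye(1)
for _ in range(200):
    phi = rng.uniform(0.5, 1.5, size=1)
    y = true_a * phi[0] + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
# theta[0] is now close to the true value 2.5
```

Because each update is O(n^2), the estimate can track the parameters while the controller runs, which is what makes the self-tuning scheme feasible in real time.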

  16. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, free of structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by MTS in a gradient from 0 N to 500 N, simulating the double-feet standing stance. The anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, a digital image processing program freely available from the National Institutes of Health. The procedure includes recognition of the digital markers, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the front view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The digital image measurements showed the following: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels, and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
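    The center-of-mass step of the marker-tracking procedure can be sketched as an intensity-weighted centroid. This is a minimal version (window selection, inversion, and segmentation are omitted), with a synthetic test patch:

```python
import numpy as np

def marker_centroid(img):
    """Sub-pixel marker position as the intensity-weighted center of mass."""
    ys, xs = np.indices(img.shape)   # row (y) and column (x) coordinates
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic 9x9 patch: a bright dot centered between pixels (4, 4) and (4, 5),
# so the true sub-pixel center is x = 4.5, y = 4.0.
img = np.zeros((9, 9))
img[4, 4] = img[4, 5] = 1.0
cx, cy = marker_centroid(img)   # -> (4.5, 4.0)
```

Weighting by gray value is what pushes the precision below one pixel (about 0.018 pixels in the study), since the centroid interpolates between pixel centers.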

  17. Nuclear probes and intraoperative gamma cameras.

    PubMed

    Heller, Sherman; Zanzonico, Pat

    2011-05-01

    Gamma probes are now an important, well-established technology in the management of cancer, particularly in the detection of sentinel lymph nodes. Intraoperative sentinel lymph node as well as tumor detection may be improved under some circumstances by the use of beta (negatron or positron), rather than gamma, detection, because the very short range (∼ 1 mm or less) of such particulate radiations eliminates the contribution of confounding counts from activity other than in the immediate vicinity of the detector. This has led to the development of intraoperative beta probes. Gamma camera imaging also benefits from short source-to-detector distances and minimal overlying tissue, and intraoperative small field-of-view gamma cameras have therefore been developed as well. Radiation detectors for intraoperative probes can generally be characterized as either scintillation or ionization detectors. Scintillators used in scintillation-detector probes include thallium-doped sodium iodide, thallium- and sodium-doped cesium iodide, and cerium-doped lutetium oxyorthosilicate. Alternatives to inorganic scintillators are plastic scintillators, solutions of organic scintillation compounds dissolved in an organic solvent that is subsequently polymerized to form a solid. Their combined high counting efficiency for beta particles and low counting efficiency for 511-keV annihilation γ-rays make plastic scintillators well-suited as intraoperative beta probes in general and positron probes in particular. Semiconductors used in ionization-detector probes include cadmium telluride, cadmium zinc telluride, and mercuric iodide. Clinical studies directly comparing scintillation and semiconductor intraoperative probes have not provided a clear choice between scintillation and ionization detector-based probes. The earliest small field-of-view intraoperative gamma camera systems were hand-held devices having fields of view of only 1.5-2.5 cm in diameter that used conventional thallium

  18. Replacing 16-mm film cameras with high-definition digital cameras

    NASA Astrophysics Data System (ADS)

    Balch, Kris S.

    1995-09-01

    For many years, 16 mm film cameras have been used in severe environments. These film cameras are used on Hy-G automotive sleds, airborne gun cameras, range tracking, and other hazardous environments. The companies and government agencies using these cameras need to replace them with a more cost-effective solution. Film-based cameras still produce the best resolving capability; however, film development time, chemical disposal, recurring media cost, and faster digital analysis are factors driving the desire for a 16 mm film camera replacement. This paper describes a new camera from Kodak that has been designed to replace 16 mm high-speed film cameras.

  19. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  20. Geologic map of the northern hemisphere of Vesta based on Dawn Framing Camera (FC) images

    NASA Astrophysics Data System (ADS)

    Ruesch, Ottaviano; Hiesinger, Harald; Blewett, David T.; Williams, David A.; Buczkowski, Debra; Scully, Jennifer; Yingst, R. Aileen; Roatsch, Thomas; Preusker, Frank; Jaumann, Ralf; Russell, Christopher T.; Raymond, Carol A.

    2014-12-01

    The Dawn Framing Camera (FC) has imaged the northern hemisphere of the Asteroid (4) Vesta at high spatial resolution and coverage. This study represents the first investigation of the overall geology of the northern hemisphere (22-90°N, quadrangles Av-1, 2, 3, 4 and 5) using these unique Dawn mission observations. We have compiled a morphologic map and performed crater size-frequency distribution (CSFD) measurements to date the geologic units. The hemisphere is characterized by a heavily cratered surface with a few highly subdued basins up to ∼200 km in diameter. The most widespread unit is a plateau (cratered highland unit), similar to, although of lower elevation than, the equatorial Vestalia Terra plateau. Large-scale troughs and ridges have regionally affected the surface. Between ∼180°E and ∼270°E, these tectonic features are well developed and related to the south pole Veneneia impact (Saturnalia Fossae trough unit); elsewhere on the hemisphere they are rare and subdued (Saturnalia Fossae cratered unit). In these pre-Rheasilvia units we observed an unexpectedly high frequency of impact craters up to ∼10 km in diameter, whose formation could in part be related to the Rheasilvia basin-forming event. The Rheasilvia impact may also have affected the northern hemisphere with small-scale S-N lineations, but without covering it with an ejecta blanket. Post-Rheasilvia impact craters are small (<60 km in diameter) and show a wide range of degradation states due to impact gardening and mass wasting processes. Where fresh, they display an ejecta blanket, bright rays and slope movements on walls. In places, crater rims have dark material ejecta and some crater floors are covered by ponded material interpreted as impact melt.

  1. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes.

    PubMed

    Westlund, Jacqueline Kory; D'Mello, Sidney K; Olney, Andrew M

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker's estimates of movement and body movements recorded from the seat (r =.720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure-sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r =.606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker's movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker's estimates were correlated with the movement of a person's head as tracked with a Kinect while the person was seated at a desk (Study 3). Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
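    A frame-differencing motion estimate in the spirit of Motion Tracker's silhouettes can be sketched as the fraction of pixels that change between consecutive frames. The threshold and synthetic frames below are illustrative; this is not the published implementation:

```python
import numpy as np

def motion_estimate(frames, thresh=10):
    """Per-frame motion score: fraction of pixels whose absolute
    inter-frame difference exceeds a threshold (a motion silhouette)."""
    scores = []
    prev = frames[0].astype(np.int16)        # widen to avoid uint8 wraparound
    for f in frames[1:]:
        cur = f.astype(np.int16)
        silhouette = np.abs(cur - prev) > thresh
        scores.append(silhouette.mean())
        prev = cur
    return scores

# Two synthetic 8-bit grayscale frames: a bright block "moves" two pixels
# right, so 20 of the 400 pixels change (score 0.05).
f0 = np.zeros((20, 20), np.uint8); f0[5:10, 5:10] = 200
f1 = np.zeros((20, 20), np.uint8); f1[5:10, 7:12] = 200
scores = motion_estimate([f0, f1])
```

Because the score depends only on pixel differences, it works with any commodity camera and with pre-recorded video, which is the property the paper emphasizes.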

  2. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r =.720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure-sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r =.606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk (Study 3). Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771

  3. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    PubMed Central

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for external validation and scaling purposes. Two network filtering methods are implemented, and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  4. Positron-rubidium scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.

    1990-01-01

    A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering, the (effective) total integrated cross section has an energy dependence that is contrary to recent experimental measurements.

  5. Self-Localization of a Multi-Fisheye Camera Based Augmented Reality System in Textureless 3d Building Models

    NASA Astrophysics Data System (ADS)

    Urban, S.; Leitloff, J.; Wursthorn, S.; Hinz, S.

    2013-10-01

    Georeferenced images help planners to compare and document the progress of underground construction sites. As underground positioning cannot rely on GPS/GNSS, we introduce a solely vision-based localization method that makes use of a textureless 3D CAD model of the construction site. In our analysis-by-synthesis approach, depth and normal fisheye images are rendered from presampled positions, and gradient orientations are extracted to build a high-dimensional synthetic feature space. Acquired camera images are then matched to those features using a robust distance metric and fast nearest-neighbor search. In this manner, initial poses can be obtained on a laptop in real time using concurrent processing and the graphics processing unit.
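    The matching of an acquired image against the presampled synthetic feature space reduces to a nearest-neighbor lookup. In the sketch below, random descriptors stand in for the rendered gradient-orientation features, and a plain L1 distance stands in for the paper's robust metric; all names and sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
poses = rng.random((500, 6))       # 500 presampled 6-DoF poses (toy values)
features = rng.random((500, 64))   # one rendered descriptor per pose

def localize(query):
    """Return the pose whose synthetic descriptor is nearest in L1 distance."""
    d = np.abs(features - query).sum(axis=1)
    return poses[np.argmin(d)]

# A slightly perturbed copy of descriptor 42 should map back to pose 42.
est = localize(features[42] + 0.01 * rng.standard_normal(64))
```

In practice the lookup would use an approximate nearest-neighbor index rather than a brute-force scan, which is what makes real-time initialization feasible.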

  6. The CAMCAO infrared camera

    NASA Astrophysics Data System (ADS)

    Amorim, Antonio; Melo, Antonio; Alves, Joao; Rebordao, Jose; Pinhao, Jose; Bonfait, Gregoire; Lima, Jorge; Barros, Rui; Fernandes, Rui; Catarino, Isabel; Carvalho, Marta; Marques, Rui; Poncet, Jean-Marc; Duarte Santos, Filipe; Finger, Gert; Hubin, Norbert; Huster, Gotthard; Koch, Franz; Lizon, Jean-Louis; Marchetti, Enrico

    2004-09-01

    The CAMCAO instrument is a high-resolution near-infrared (NIR) camera conceived to operate together with the new ESO Multi-conjugate Adaptive optics Demonstrator (MAD), with the goal of evaluating the feasibility of Multi-Conjugate Adaptive Optics (MCAO) techniques on the sky. It is a high-resolution, wide field-of-view (FoV) camera that is optimized to use the extended correction of the atmospheric turbulence provided by MCAO. While the first purpose of this camera is sky observation in the MAD setup, to validate the MCAO technology, in a second phase the CAMCAO camera is planned to attach directly to the VLT for scientific astrophysical studies. The camera is based on the 2kx2k HAWAII2 infrared detector controlled by an ESO external IRACE system and includes standard IR band filters mounted on a positional filter wheel. The CAMCAO design requires that the optical components and the IR detector be kept at low temperatures in order to avoid emitting radiation and to lower detector noise in the analysis region. The cryogenic system includes an LN2 tank and a specially developed pulse-tube cryocooler. Field and pupil cold stops are implemented to reduce the infrared background and the stray light. The CAMCAO optics provide diffraction-limited performance down to the J band, but the detector sampling fulfills the Nyquist criterion for the K band (2.2 μm).

  7. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  8. Biologically optimized 3-dimensional in vivo predictive assay-based radiation therapy using positron emission tomography-computerized tomography imaging.

    PubMed

    Brahme, Anders

    2003-01-01

    PET-CT is probably the ultimate tool for accurate tumor imaging and 3-dimensional in vivo predictive assay of radiation sensitivity. By imaging the tumor twice during the early course of therapy, it should be possible to quantify both the tumor responsiveness to therapy and the rate of loss of functional tumor cells using the presently derived equations. This new information is ideal for use together with biologically based therapy optimization and makes it possible to accurately quantify the dose-response relation, at least for the bulk of the tumor cells. Since the tumor responsiveness is available after about one and a half weeks of therapy, the information is also ideal for use with adaptive therapy, where all forms of deviations from the original treatment plan can be accurately corrected for, since they generally influence the still functional, but mainly doomed, tumor cell compartment. Thus, uncertainties such as 1) geometric misalignment of the therapeutic beam with the tumor, 2) deviations of the delivered dose distribution from the planned delivery, whether due to 3) an erroneous treatment-planning algorithm or 4) treatment equipment uncertainties, and 5) deviations in the anticipated responsiveness of the patient's tumor based on historical response data, can all be taken into account. Fortunately, when a larger tumor cell compartment than expected is seen, an increased dose should always be delivered during the remainder of the treatment, independently of whichever combination of the above deviations was the true reason. With high-energy photon and hadron therapy it is even possible to image the integral dose delivery in vivo during or after a treatment using PET-CT imaging. High-energy photons above about 20 MeV produce positron emitters through photonuclear reactions in tissue, which are proportional to the photon fluence and thus approximately also to the absorbed dose. Light ion beams, the ultimate radiation modality with regard to physical

  9. Caught on Video! Using Handheld Digital Video Cameras to Support Evidence-Based Reasoning

    ERIC Educational Resources Information Center

    Lottero-Perdue, Pamela S.; Nealy, Jennifer; Roland, Christine; Ryan, Amy

    2011-01-01

    Engaging elementary students in evidence-based reasoning is an essential aspect of science and engineering education. Evidence-based reasoning involves students making claims (i.e., answers to questions, or solutions to problems), providing evidence to support those claims, and articulating their reasoning to connect the evidence to the claim. In…

  10. CHARACTERIZATION OF PLASTICALLY-INDUCED STRUCTURAL CHANGES IN A Zr-BASED BULK METALLIC GLASS USING POSITRON ANNIHILATION SPECTROSCOPY

    SciTech Connect

    Flores, K M; Kanungo, B P; Glade, S C; Asoka-Kumar, P

    2005-09-16

    Flow in metallic glasses is associated with stress-induced cooperative rearrangements of small groups of atoms involving the surrounding free volume. Understanding the details of these rearrangements therefore requires knowledge of the amount and distribution of the free volume and how that distribution evolves with deformation. The present study employs positron annihilation spectroscopy to investigate the free volume change in Zr{sub 58.5}Cu{sub 15.6}Ni{sub 12.8}Al{sub 10.3}Nb{sub 2.8} bulk metallic glass after inhomogeneous plastic deformation by cold rolling and structural relaxation by annealing. Results indicate that the size distribution of open volume sites is at least bimodal. The size and concentration of the larger group, identified as flow defects, changes with processing. Following initial plastic deformation the size of the flow defects increases, consistent with the free volume theory for flow. Following more extensive deformation, however, the size distribution of the positron traps shifts, with much larger open volume sites forming at the expense of the flow defects. This suggests that a critical strain is required for flow defects to coalesce and form more stable nanovoids, which have been observed elsewhere by high resolution TEM. Although these results suggest the presence of three distinct open volume size groups, further analysis indicates that all groups have the same line shape parameter. This is in contrast to the distinctly different interactions observed in crystalline materials with multiple defect types. This similarity may be due to the disordered structure of the glass and positron affinity to particular atoms surrounding open-volume regions.

  11. A residual correction method for high-resolution PET reconstruction with application to on-the-fly Monte Carlo based model of positron range

    PubMed Central

    Fu, Lin; Qi, Jinyi

    2010-01-01

    Purpose: The quality of tomographic images is directly affected by the system model being used in image reconstruction. An accurate system matrix is desirable for high-resolution image reconstruction, but it often leads to high computation cost. In this work the authors present a maximum a posteriori reconstruction algorithm with residual correction to alleviate the tradeoff between the model accuracy and the computation efficiency in image reconstruction. Methods: Unlike conventional iterative methods that assume that the system matrix is accurate, the proposed method reconstructs an image with a simplified system matrix and then removes the reconstruction artifacts through residual correction. Since the time-consuming forward and back projection operations using the accurate system matrix are not required in every iteration, image reconstruction time can be greatly reduced. Results: The authors apply the new algorithm to high-resolution positron emission tomography reconstruction with an on-the-fly Monte Carlo (MC) based positron range model. Computer simulations show that the new method is an order of magnitude faster than the traditional MC-based method, whereas the visual quality and quantitative accuracy of the reconstructed images are much better than that obtained by using the simplified system matrix alone. Conclusions: The residual correction method can reconstruct high-resolution images and is computationally efficient. PMID:20229880
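    The core idea of reconstructing with a simplified system matrix and then removing the resulting artifacts through residuals from the accurate model is the classical iterative-refinement pattern. The toy linear system below sketches that pattern only; it is not the authors' MAP algorithm, and the matrices are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
A_acc = np.eye(n) + 0.1 * rng.random((n, n))   # "accurate" system matrix
A_simp = np.eye(n)                             # simplified, cheap-to-apply model
x_true = rng.random(n)
y = A_acc @ x_true                             # simulated measurements

# Iterative refinement: solve cheaply with the simplified model, then
# repeatedly correct using residuals computed with the accurate model.
x = np.zeros(n)
for _ in range(100):
    r = y - A_acc @ x                      # residual under the accurate model
    x = x + np.linalg.solve(A_simp, r)     # cheap correction step
```

The expensive accurate model only appears in a residual evaluation, not in every projection/backprojection pair, which is the source of the reported order-of-magnitude speedup.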

  12. Individualized Positron Emission Tomography–Based Isotoxic Accelerated Radiation Therapy Is Cost-Effective Compared With Conventional Radiation Therapy: A Model-Based Evaluation

    SciTech Connect

    Bongers, Mathilda L.; Coupé, Veerle M.H.; De Ruysscher, Dirk; Oberije, Cary; Lambin, Philippe; Uyl-de Groot, Cornelia A.

    2015-03-15

    Purpose: To evaluate long-term health effects, costs, and cost-effectiveness of positron emission tomography (PET)-based isotoxic accelerated radiation therapy treatment (PET-ART) compared with conventional fixed-dose CT-based radiation therapy treatment (CRT) in non-small cell lung cancer (NSCLC). Methods and Materials: Our analysis uses a validated decision model, based on data of 200 NSCLC patients with inoperable stage I-IIIB. Clinical outcomes, resource use, costs, and utilities were obtained from the Maastro Clinic and the literature. Primary model outcomes were the difference in life-years (LYs), quality-adjusted life-years (QALYs), costs, and the incremental cost-effectiveness and cost/utility ratio (ICER and ICUR) of PET-ART versus CRT. Model outcomes were obtained from averaging the predictions for 50,000 simulated patients. A probabilistic sensitivity analysis and scenario analyses were carried out. Results: The average incremental costs per patient of PET-ART were €569 (95% confidence interval [CI] €−5327-€6936) for 0.42 incremental LYs (95% CI 0.19-0.61) and 0.33 QALYs gained (95% CI 0.13-0.49). The base-case scenario resulted in an ICER of €1360 per LY gained and an ICUR of €1744 per QALY gained. The probabilistic analysis gave a 36% probability that PET-ART improves health outcomes at reduced costs and a 64% probability that PET-ART is more effective at slightly higher costs. Conclusion: On the basis of the available data, individualized PET-ART for NSCLC seems to be cost-effective compared with CRT.
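    The reported ratios follow directly from the incremental costs and effects; the small differences from the published €1360 and €1744 figures presumably come from the model's unrounded outputs:

```python
# Base-case incremental outcomes as reported in the abstract.
delta_cost = 569.0    # incremental cost per patient, EUR
delta_ly = 0.42       # incremental life-years gained
delta_qaly = 0.33     # incremental QALYs gained

icer = delta_cost / delta_ly    # EUR per life-year gained (~1355)
icur = delta_cost / delta_qaly  # EUR per QALY gained (~1724)
```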

  13. Positrons for linear colliders

    SciTech Connect

    Ecklund, S.

    1987-11-01

    The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)

  14. Texas Intense Positron Source (TIPS)

    NASA Astrophysics Data System (ADS)

    O'Kelly, D.

    2003-03-01

    The Texas Intense Positron Source (TIPS) is a state-of-the-art variable-energy positron beam under construction at the Nuclear Engineering Teaching Laboratory (NETL). Projected intensities on the order of 10^7 e+/second using ^64Cu as the positron source are expected. Owing to its short half-life (t1/2 = 12.8 h), plans are to produce the ^64Cu isotope on-site using beam port 1 of the NETL TRIGA Mark II reactor. Following tungsten moderation, the positrons will be electrostatically focused and accelerated from a few tens of eV up to 30 keV. This intensity and energy range should allow routine performance of several analytical techniques of interest to surface scientists (PALS, PADB and perhaps PAES and LEPD). The TIPS project is being developed in parallel phases. Phase I entails construction of the vacuum system, source chamber, main beam line, electrostatic/magnetic focusing and transport system, as well as moderator design. Initial construction, testing and characterization of the moderator and beam transport elements are underway and will use a commercially available 10 mCi ^22Na radioisotope as a source of positrons. Phase II is concerned primarily with the Cu source geometry and thermal properties, as well as production and physical handling of the radioisotope. Additional instrument optimization based upon experience gained during Phase I will be incorporated in the final design. Current progress of both phases will be presented along with motivations and future directions.

  15. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    NASA Astrophysics Data System (ADS)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams, using PET/CT to image the induced positron activity. Measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. Total doses of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue, mapped onto the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions, but less so in blood- and oxygen-rich soft tissues. For the preoperative rectal cancer patient, however, a 2 ± 0.5 cm misalignment was observed in the cranial-caudal direction between the induced activity distribution and the treatment plan, indicating a beam-to-patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a hasty patient setup in the PET/CT scanner, a slight mispositioning of the patient was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat, allowing portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  16. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy.

    PubMed

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E; Maguire, Gerald Q; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-21

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams, using PET/CT to image the induced positron activity. Measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. Total doses of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue, mapped onto the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions, but less so in blood- and oxygen-rich soft tissues. For the preoperative rectal cancer patient, however, a 2 ± 0.5 cm misalignment was observed in the cranial-caudal direction between the induced activity distribution and the treatment plan, indicating a beam-to-patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to a hasty patient setup in the PET/CT scanner, a slight mispositioning of the patient was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat, allowing portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  17. Interferometer-based structured-illumination microscopy utilizing complementary phase relationship through constructive and destructive image detection by two cameras.

    PubMed

    Shao, L; Winoto, L; Agard, D A; Gustafsson, M G L; Sedat, J W

    2012-06-01

    In an interferometer-based fluorescence microscope, a beam splitter is often used to combine two emission wavefronts interferometrically. There are two perpendicular paths along which the interference fringes can propagate and normally only one is used for imaging. However, the other path also contains useful information. Here we introduced a second camera to our interferometer-based three-dimensional structured-illumination microscope (I(5)S) to capture the fringes along the normally unused path, which are out of phase by π relative to the fringes along the other path. Based on this complementary phase relationship and the well-defined phase interrelationships among the I(5)S data components, we can deduce and then computationally eliminate the path length errors within the interferometer loop using the simultaneously recorded fringes along the two imaging paths. This self-correction capability can greatly relax the requirement for eliminating the path length differences before and maintaining that status during each imaging session, which are practically challenging tasks. Experimental data is shown to support the theory. PMID:22472010
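    The π phase offset between the two beam-splitter outputs follows from energy conservation: an idealized constructive port records 1 + cos φ while the destructive port records 1 − cos φ, so their sum is constant and their difference isolates the interference term that enables the path-length error estimation. A minimal numeric sketch (idealized unit-amplitude fringes; all values hypothetical):

```python
import math

# Two interferometer output ports with fringes out of phase by pi.
phases = [2 * math.pi * k / 8 for k in range(8)]
port_a = [1 + math.cos(p) for p in phases]            # constructive port
port_b = [1 + math.cos(p + math.pi) for p in phases]  # destructive port (pi shift)

# Energy conservation: the two ports always sum to a constant intensity.
sums = [a + b for a, b in zip(port_a, port_b)]
# The difference isolates the interference term (= 2*cos(phase)),
# which is what makes the two simultaneous images usable for phase estimation.
diffs = [a - b for a, b in zip(port_a, port_b)]
```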

  18. Unsupervised Spectral-Spatial Feature Selection-Based Camouflaged Object Detection Using VNIR Hyperspectral Camera

    PubMed Central

    2015-01-01

    The detection of camouflaged objects is important for industrial inspection, medical diagnoses, and military applications. Conventional supervised learning methods for hyperspectral images can be a feasible solution. Such approaches, however, require a priori information of a camouflaged object and background. This letter proposes a fully autonomous feature selection and camouflaged object detection method based on the online analysis of spectral and spatial features. The statistical distance metric can generate candidate feature bands and further analysis of the entropy-based spatial grouping property can trim the useless feature bands. Camouflaged objects can be detected better with less computational complexity by optical spectral-spatial feature analysis. PMID:25879073

  19. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera

  20. Infrared imaging spectroscopic system based on a PGP spectrograph and a monochrome infrared camera

    NASA Astrophysics Data System (ADS)

    Garcia-Allende, Pilar Beatriz; Anabitarte, Francisco; Conde, Olga M.; Madruga, Francisco J.; Lomer, Mauro; Lopez-Higuera, Jose M.

    2008-04-01

    Hyperspectral imaging spectroscopy has been widely used in remote sensing, but its potential for applications in industrial and biological fields is enormous. An imaging spectrometer acquires, for each observation line, spectra based on the reflectance of the material under study. In this way, imaging spectroscopy allows the simultaneous determination of the optical spectrum components and the spatial location of an object on a surface. A simple, small and low-cost spectrometer, such as those based on passive Prism-Grating-Prism (PGP) devices, is required for the abovementioned application fields. In this paper a non-intrusive and non-contact near-infrared acquisition system based on a PGP spectrometer is presented. A previously designed system covering the Vis-NIR range has been extended to the whole near-infrared range of the spectrum. The motivation behind this work is to improve material characterization. To our knowledge, no imaging spectroscopic system based on a PGP device working in this range has been previously reported. The components of the system and its assembly, alignment and calibration procedures are described in detail. The system can be generalized to a wide variety of applications by employing specific and adequate data processing.

  1. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    PubMed

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
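    The ratiometric readout described above reduces to comparing the mean red (Cy3 PL) and green (gQD PL) channel intensities over a detection zone. A self-contained sketch with synthetic pixel values (no camera I/O; all numbers hypothetical):

```python
# Ratiometric analysis of an RGB detection zone: the red/green intensity
# ratio serves as the analytical signal (Cy3 emission in the red channel,
# gQD emission in the green channel).

# Synthetic 3x3 detection zone, each pixel as an (R, G, B) tuple.
zone = [
    [(120, 200, 10), (130, 210, 12), (125, 205, 11)],
    [(118, 198, 10), (128, 202, 13), (122, 207, 12)],
    [(121, 201, 11), (127, 204, 10), (124, 203, 12)],
]

pixels = [px for row in zone for px in row]
mean_r = sum(px[0] for px in pixels) / len(pixels)
mean_g = sum(px[1] for px in pixels) / len(pixels)

# FRET-sensitized ratio: rises as hybridization brings Cy3 near the gQDs.
ratio = mean_r / mean_g
```

Taking a ratio of the two channels, rather than the raw Cy3 intensity, cancels pixel-to-pixel variations in excitation intensity and paper background, which is what makes a hand-held UV lamp and consumer camera viable as instrumentation.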

  2. A fall prediction methodology for elderly based on a depth camera.

    PubMed

    Alazrai, Rami; Mowafi, Yaser; Hamad, Eyad

    2015-01-01

    With the aging of the population, efficient tracking of elderly activities of daily living (ADLs) has gained interest. Advances in assistive computing and sensor technologies have made it possible to support elderly people with real-time acquisition and monitoring for emergency and medical care. In an earlier study, we proposed an anatomical-plane-based human activity representation for elderly fall detection, namely, the motion-pose geometric descriptor (MPGD). In this paper, we present a prediction framework that utilizes the MPGD to construct an accumulated-histogram-based representation of an ongoing human activity. The accumulated histograms of MPGDs are then used to train a set of support-vector-machine classifiers with a probabilistic output to predict a fall in an ongoing human activity. Evaluation results of the proposed framework, using real case scenarios, demonstrate its efficacy in providing a feasible approach towards accurately predicting elderly falls. PMID:26737412

  3. Mare Crisium area topography - A comparison of earth-based radar and Apollo mapping camera results

    NASA Technical Reports Server (NTRS)

    Zisk, S.

    1978-01-01

    An earth-based radar topography (ERT) map has been constructed of the Mare Crisium area. Systematic and random sources of error are discussed. A comparison between the ERT map and Lunar Topographic Orthophotomaps shows a mean random discrepancy of less than 100 m between the two maps, except for small-scale features (20 km or less in diameter), where systematic smoothing reduces the ERT elevation contrast.

  4. Twenty-one degrees of freedom model based hand pose tracking using a monocular RGB camera

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jong-Il; Park, Hanhoon

    2016-01-01

    It is difficult to visually track a user's hand because of the many degrees of freedom (DOF) a hand has. For this reason, most model-based hand pose tracking methods have relied on the use of multiview images or RGB-D images. This paper proposes a model-based method that accurately tracks three-dimensional hand poses using monocular RGB images in real time. The main idea of the proposed method is to reduce hand tracking ambiguity by adopting a step-by-step estimation scheme consisting of three steps performed in consecutive order: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation. In addition, this paper proposes highly effective algorithms for each step. With the assumption that a human hand can be considered as an assemblage of articulated planes, the proposed method uses a piece-wise planar hand model which enables hand model regeneration. The hand model regeneration modifies the hand model to fit the current user's hand and improves the accuracy of the hand pose estimation results. Above all, the proposed method can operate in real time using only CPU-based processing. Consequently, it can be applied to various platforms, including egocentric vision devices such as wearable glasses. The results of several experiments conducted verify the efficiency and accuracy of the proposed method.

  5. Recent advances in digital camera optics

    NASA Astrophysics Data System (ADS)

    Ishiguro, Keizo

    2012-10-01

    The digital camera market has expanded enormously in the last ten years. The zoom lens is the key factor determining digital camera body size and image quality. Zoom lens technologies have built on several analog advances, including methods for manufacturing aspherical lenses and mechanisms for image stabilization; Panasonic is one of the pioneers of both. I will review past trends in zoom lens optics and the original optical technologies of the Panasonic "LUMIX" digital cameras, as well as the optics of 3D camera systems, and I will also consider future trends in digital cameras.

  6. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  7. Design, Synthesis, and Evaluation of an (18)F-Labeled Radiotracer Based on Celecoxib-NBD for Positron Emission Tomography (PET) Imaging of Cyclooxygenase-2 (COX-2).

    PubMed

    Kaur, Jatinder; Tietz, Ole; Bhardwaj, Atul; Marshall, Alison; Way, Jenilee; Wuest, Melinda; Wuest, Frank

    2015-10-01

    A series of novel fluorine-containing cyclooxygenase-2 (COX-2) inhibitors was designed and synthesized based on the previously reported fluorescent COX-2 imaging agent celecoxib-NBD (3; NBD = 7-nitrobenzofurazan). In vitro COX-1/COX-2 inhibitory data show that N-(4-fluorobenzyl)-4-(5-p-tolyl-3-trifluoromethylpyrazol-1-yl)benzenesulfonamide (5; IC50 = 0.36 μM, SI > 277) and N-fluoromethyl-4-(5-p-tolyl-3-trifluoromethylpyrazol-1-yl)benzenesulfonamide (6; IC50 = 0.24 μM, SI > 416) are potent and selective COX-2 inhibitors. Compound 5 was selected for radiolabeling with the short-lived positron emitter fluorine-18 ((18)F) and evaluated as a positron emission tomography (PET) imaging agent. Radiotracer [(18)F]5 was analyzed in vitro and in vivo using the human colorectal cancer model HCA-7. Although radiotracer uptake into COX-2-expressing HCA-7 cells was high, no evidence for COX-2-specific binding was found. Radiotracer uptake into HCA-7 tumors in vivo was low and similar to that of muscle, used as reference tissue. PMID:26287271

  8. Fuzzy logic-based approach to wavelet denoising of 3D images produced by time-of-flight cameras.

    PubMed

    Jovanov, Ljubomir; Pižurica, Aleksandra; Philips, Wilfried

    2010-10-25

    In this paper we present a new denoising method for the depth images of a 3D imaging sensor based on the time-of-flight principle. We propose novel ways to use luminance-like information produced by a time-of-flight camera along with depth images. Firstly, we propose a wavelet-based method for estimating the noise level in depth images using luminance information. The underlying idea is that luminance carries information about the power of the optical signal reflected from the scene and is hence related to the signal-to-noise ratio for every pixel within the depth image. In this way, we can efficiently solve the difficult problem of estimating the non-stationary noise within the depth images. Secondly, we use luminance information to better restore object boundaries masked with noise in the depth images. Information from luminance images is introduced into the estimation formula through the use of fuzzy membership functions. In particular, we take the correlation between the measured depth and luminance into account, and the fact that edges (object boundaries) present in the depth image are likely to occur in the luminance image as well. The results on real 3D images show a significant improvement over the state-of-the-art in the field. PMID:21164605
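    The luminance-guided idea can be illustrated with a toy fuzzy membership on luminance: bright pixels (strong optical return, high SNR) keep their raw depth, while dark pixels lean on a local average. This is only a 1D sketch of the weighting concept, not the authors' wavelet-domain estimator; all parameter values are hypothetical:

```python
# Fuzzy luminance-weighted smoothing of a 1D depth profile.
# Membership w in [0, 1] grows with luminance: high luminance -> high SNR
# -> trust the raw depth; low luminance -> lean on the local average.

def membership(lum, low=20.0, high=100.0):
    """Piecewise-linear fuzzy membership of 'reliable pixel'."""
    if lum <= low:
        return 0.0
    if lum >= high:
        return 1.0
    return (lum - low) / (high - low)

def denoise(depth, luminance):
    out = []
    for i in range(len(depth)):
        lo, hi = max(0, i - 1), min(len(depth), i + 2)
        local_mean = sum(depth[lo:hi]) / (hi - lo)   # up-to-3-tap average
        w = membership(luminance[i])
        out.append(w * depth[i] + (1.0 - w) * local_mean)
    return out

depth     = [2.0, 2.1, 5.0, 2.0, 1.9]   # spike at index 2 (noisy measurement)
luminance = [150, 140, 10, 145, 155]    # dark pixel -> unreliable depth there

cleaned = denoise(depth, luminance)     # spike pulled toward its neighbours
```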

  9. Paper-based three-dimensional microfluidic device for monitoring of heavy metals with a camera cell phone.

    PubMed

    Wang, Hu; Li, Ya-jie; Wei, Jun-feng; Xu, Ji-run; Wang, Yun-hua; Zheng, Guo-xia

    2014-05-01

    A 3D paper-based microfluidic device has been developed for colorimetric determination of selected heavy metals in water samples by stacking layers of wax-patterned paper and double-sided adhesive tape. It has the capability of wicking fluids and distributing microliter volumes of samples from a single inlet into arrays of detection zones without external pumps; thus a range of metal assays can be performed simply and inexpensively. We demonstrate a prototype with four sample inlets for up to four heavy metal assays each, with detection limits as follows: Cu(II) = 0.29 ppm, Ni(II) = 0.33 ppm, Cd(II) = 0.19 ppm, and Cr(VI) = 0.35 ppm, which provided quantitative data in agreement with values obtained from atomic absorption. It has the ability to identify these four metals in mixtures and is immune to interference from either nontoxic metal ions such as Na(I) and K(I) or components found in reservoir or beach water. With the incorporation of a portable detector, a camera mobile phone, this 3D paper-based microfluidic device should be useful as a simple, rapid, on-site screening approach for heavy metals in aquatic environments. PMID:24618990

  10. [The linear hyperspectral camera rotating scan imaging geometric correction based on the precise spectral sampling].

    PubMed

    Wang, Shu-min; Zhang, Ai-wu; Hu, Shao-xing; Wang, Jing-meng; Meng, Xian-gang; Duan, Yi-hao; Sun, Wei-dong

    2015-02-01

    As the rotation speed of the ground-based hyperspectral imaging system is too fast during image collection, exceeding the speed limitation, data are missing in the rectified image, appearing as black lines. At the same time, the collected raw images exhibit serious distortion, which affects feature information classification and identification. To solve these problems, this paper first introduces each component of the ground-based hyperspectral imaging system and gives the general data collection process. The rotation speed during data collection is controlled according to the image coverage of each frame and the image collection speed of the system. The spatial orientation model is then deduced in detail, combining the start scanning angle, the stop scanning angle, the minimum distance between the sensor and the scanned object, etc. The oriented image is divided into grids and resampled spectrally; the general flow of distortion-image correction is presented. Since the image spatial resolution differs between adjacent frames, and in order to preserve the highest resolution in the corrected image, the minimum ground sampling distance is employed as the grid unit to divide the geo-referenced image. Taking into account the spectral distortion caused by the direct sampling method when the new uniform grids and the old uneven grids are superimposed to take pixel values, a precise spectral sampling method based on the position distribution is proposed. A distorted image collected at the Lao Si Cheng ruins in Zhangjiajie, Hunan province, is corrected with the proposed algorithm. The features keep their original geometric characteristics, verifying the validity of the algorithm. The spectra of different features are extracted to compute the correlation coefficient. The results show that the improved spectral sampling method is

  11. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific or engineering drawings has received more and more attention. In hydrology, meteorology, medicine and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Furthermore, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point location is devised, avoiding point-by-point processing with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated while the accuracy remains satisfying. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.

  12. A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.

    PubMed

    Ar, Ilktan; Akgul, Yusuf Sinan

    2014-11-01

    Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted on a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained. PMID:24860037

  13. Positron binding to molecules

    NASA Astrophysics Data System (ADS)

    Danielson, J. R.

    2011-05-01

    While there is theoretical evidence that positrons can bind to atoms, calculations for molecules are much less precise. Unfortunately, there have been no measurements of positron-atom binding, due primarily to the difficulty in forming positron-atom bound states in two-body collisions. In contrast, positrons attach to molecules via Feshbach resonances (VFR) in which a vibrational mode absorbs the excess energy. Using a high-resolution positron beam, this VFR process has been studied to measure binding energies for more than 40 molecules. New measurements will be described in two areas: positron binding to relatively simple molecules, for which theoretical calculations appear to be possible; and positron binding to molecules with large permanent dipole moments, which can be compared to analogous, weakly bound electron-molecule (negative-ion) states. Binding energies range from 75 meV for CS2 (no dipole moment) to 180 meV for acetonitrile (CH3CN). Other species studied include aldehydes and ketones, which have permanent dipole moments in the range 2.5 - 3.0 debye. The measured binding energies are surprisingly large (by a factor of 10 to 100) compared to those for the analogous negative ions, and these differences will be discussed. New theoretical calculations for positron-molecule binding are in progress, and a recent result for acetonitrile will be discussed. This ability to compare theory and experiment represents a significant step in attempts to understand positron binding to matter. In collaboration with A. C. L. Jones, J. J. Gosselin, and C. M. Surko, and supported by NSF grant PHY 07-55809.

  14. PSD Camera Based Position and Posture Control of Redundant Robot Considering Contact Motion

    NASA Astrophysics Data System (ADS)

    Oda, Naoki; Kotani, Kentaro

    The paper describes a position and posture controller design for a redundant robot manipulator based on the absolute position measured by an external PSD vision sensor. The redundancy gives the manipulator the potential capability to avoid obstacles while continuing given end-effector jobs under contact at a middle link of the manipulator. Under contact motion, the deformation due to joint torsion, obtained by comparing the internal and external position sensors, is actively suppressed by an internal/external position hybrid controller. The selection matrix of the hybrid loop is given as a function of the deformation, and the detected deformation is also utilized in the compliant motion controller for passive obstacle avoidance. The validity of the proposed method is verified by several experimental results on a 3-link planar redundant manipulator.

  15. Deriving hydraulic roughness from camera-based high resolution topography in field and laboratory experiments

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Neugirg, Fabian; Ebert, Louisa; Haas, Florian; Schmidt, Jürgen; Becht, Michael; Schindewolf, Marcus

    2016-04-01

    Hydraulic roughness, represented by Manning's n, is an essential input parameter in physically based soil erosion modeling. To acquire roughness values for a given area, on-site flow experiments have to be carried out; the results are influenced by the selection of the test plot location and are thereby subject to the subjectivity of the researchers. This study aims at a methodological development for deriving Manning's n from very high-resolution surface models created with structure-from-motion approaches. Data acquisition took place during several field experiments in the Lainbach valley, southern Germany, on agricultural sites in Saxony, eastern Germany, and in central Brazil. Rill and interrill conditions were simulated by flow experiments. To validate our findings, stream velocity, as an input for the Manning equation, was measured with coloured dye. Grain and aggregate sizes were derived by measuring distances from a best-fit line to the reconstructed soil surface. Several diameters from D50 to D90 were tested, with D90 showing the best correlation between the tracer experiments and the photogrammetrically acquired data. A variety of roughness parameters were tested (standard deviation, random roughness, Garbrecht's n and D90). The best agreement between particle size and hydraulic roughness was achieved with a non-linear sigmoid function and D90, rather than with the Garbrecht equation or statistical parameters. To consolidate these findings, a laboratory setup was created to reproduce the field data under controlled conditions, excluding unknown influences such as infiltration and changes in surface morphology by erosion.
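The validation step above rests on the Manning equation, v = (1/n) R^(2/3) S^(1/2), rearranged for n once the dye-tracer velocity is known. A minimal sketch (all numeric values are illustrative, not the study's measurements):

```python
def manning_n(velocity, hydraulic_radius, slope):
    """Back-calculate Manning's n from measured flow velocity v (m/s),
    hydraulic radius R (m) and slope S (m/m): n = R^(2/3) * S^(1/2) / v."""
    return hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5 / velocity

# Illustrative shallow rill flow (assumed values)
n = manning_n(velocity=0.25, hydraulic_radius=0.01, slope=0.05)
print(round(n, 4))  # ~0.0415
```

The study's contribution is then to predict such n values from the photogrammetric D90 alone, removing the need for a tracer experiment at every plot.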

  16. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP), coupled with Monolithic Microwave Integrated Circuit (MMIC) technology, are now making focal-plane-array coherent infrared (IR) cameras viable. Unlike conventional IR cameras, which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photosensitive heterodyne mixer followed by an intermediate-frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring of stack plume emissions, and wind shear detection.

  17. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key in visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system must be calibrated. In engineering practice, the intrinsic parameters remain unchanged after camera calibration, but the positional relationship between the cameras can change because of vibration, knocks and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes disabled. A technology for both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix. In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regionally weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
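Step (iii) of the pipeline described above, recovering the external parameters from the essential matrix, has a standard SVD-based form. A minimal sketch with our own synthetic pose (the toy E, R_true and t_true are assumptions for the demonstration, not data from the paper):

```python
import numpy as np

def decompose_essential(E):
    """Decompose an essential matrix into the two candidate rotations and
    the translation direction (scale is unrecoverable), via SVD."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations; E is only defined up to sign anyway.
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt
    R2 = U @ W.T @ Vt
    t = U[:, 2]  # left null-space direction of E
    return R1, R2, t

# Synthetic check: build E = [t]x R from a known relative pose
R_true = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
t_true = np.array([1., 0., 0.])
tx = np.array([[0., -t_true[2], t_true[1]],
               [t_true[2], 0., -t_true[0]],
               [-t_true[1], t_true[0], 0.]])
R1, R2, t = decompose_essential(tx @ R_true)
```

The true rotation is one of the two candidates and t matches the true baseline direction up to sign; in practice the fourfold (R, ±t) ambiguity is resolved by triangulating a point and keeping the pose that places it in front of both cameras.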

  18. A Kinect(™) camera based navigation system for percutaneous abdominal puncture.

    PubMed

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of the Kinect(™) was released recently; we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and compared its performance on needle insertion guidance with that of the first-generation Kinect(™). For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect(™) depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect(™) for Windows version 2 (Kinect(™) V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect(™) for Windows version 1 (Kinect(™) V1) was tested with 12 insertions, and the TRE evaluated with the Kinect(™) V1 is statistically significantly larger than that with the Kinect(™) V2. For the animal experiment, fifteen artificial liver tumors were punctured under guidance of the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is
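The surface registration above uses ICP, which alternates nearest-neighbour matching with a closed-form rigid update. A minimal point-to-point sketch on toy data (the cloud, motion, and seed are our own assumptions; real depth-to-CT registration also needs the paper's 2D-shape initialization and outlier handling):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation and translation mapping src onto dst
    (the SVD/Kabsch solve used inside each ICP iteration)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Minimal point-to-point ICP: brute-force nearest neighbours plus
    a rigid update, repeated until the clouds align."""
    cur = src.copy()
    for _ in range(iters):
        dist = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        R, t = best_rigid_transform(cur, dst[dist.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# Toy check: recover a small, known rigid motion (assumed data, fixed seed)
rng = np.random.default_rng(1)
src = rng.random((30, 3))
a = np.deg2rad(2.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.],
                   [np.sin(a),  np.cos(a), 0.],
                   [0., 0., 1.]])
dst = src @ R_true.T + np.array([0.01, -0.01, 0.02])
err = np.linalg.norm(icp(src, dst) - dst, axis=1).mean()
```

Because ICP only converges locally, the paper's correspondence-search step that supplies a close initial pose is what makes the match reliable in the operating room.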

  19. Two low-cost digital camera-based platforms for quantitative creatinine analysis in urine.

    PubMed

    Debus, Bruno; Kirsanov, Dmitry; Yaroshenko, Irina; Sidorova, Alla; Piven, Alena; Legin, Andrey

    2015-10-01

    In clinical analysis, creatinine is a routine biomarker for the assessment of renal and muscular dysfunction. Although several techniques have been proposed for fast and accurate quantification of creatinine in human serum or urine, most of them require expensive or complex apparatus, advanced sample preparation or skilled operators. To circumvent these issues, we propose two home-made platforms, based on a CD Spectroscope (CDS) and the Computer Screen Photo-assisted Technique (CSPT), for the rapid assessment of creatinine level in human urine. Both systems display a linear range (r(2) = 0.9967 and 0.9972, respectively) from 160 μmol L(-1) to 1.6 mmol L(-1) for standard creatinine solutions (n = 15), with respective detection limits of 89 μmol L(-1) and 111 μmol L(-1). Good repeatability was observed for intra-day (1.7-2.9%) and inter-day (3.6-6.5%) measurements evaluated on three consecutive days. The performance of CDS and CSPT was also validated on real human urine samples (n = 26) using capillary electrophoresis data as reference. The corresponding Partial Least-Squares (PLS) regression models provided mean relative errors below 10% in creatinine quantification. PMID:26454461
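The figures of merit quoted above (linearity r² and detection limit) come from a standard calibration-curve analysis. A sketch with simulated data over the same 160 µmol/L to 1.6 mmol/L range (the slope, noise level, and the 3.3·s/slope LOD convention are our assumptions, not the paper's values):

```python
import numpy as np

# Simulated calibration set: sensor response vs creatinine concentration
conc = np.linspace(160, 1600, 8)          # µmol/L
rng = np.random.default_rng(0)
signal = 0.05 * conc + 2.0                # assumed linear response
signal += rng.normal(0, 0.5, conc.size)   # assumed measurement noise

# Ordinary least-squares calibration line and coefficient of determination
slope, intercept = np.polyfit(conc, signal, 1)
pred = slope * conc + intercept
r2 = 1 - ((signal - pred) ** 2).sum() / ((signal - signal.mean()) ** 2).sum()

# A common detection-limit convention: 3.3 * sd(residuals) / slope
lod = 3.3 * (signal - pred).std(ddof=2) / slope
print(round(r2, 4), round(lod, 1))
```

Unknown samples are then quantified by inverting the line, (signal − intercept) / slope; the paper's urine samples additionally use multivariate PLS because the full color response is not a single scalar.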

  20. Are We Ready for Positron Emission Tomography/Computed Tomography-based Target Volume Definition in Lymphoma Radiation Therapy?

    SciTech Connect

    Yeoh, Kheng-Wei; Mikhaeel, N. George

    2013-01-01

    Fluorine-18 fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) has become indispensable for the clinical management of lymphomas. With consistent evidence that it is more accurate than anatomic imaging in the staging and response assessment of many lymphoma subtypes, its utility continues to increase. There have therefore been efforts to incorporate PET/CT data into radiation therapy decision making and in the planning process. Further, there have also been studies investigating target volume definition for radiation therapy using PET/CT data. This article will critically review the literature and ongoing studies on the above topics, examining the value and methods of adding PET/CT data to the radiation therapy treatment algorithm. We will also discuss the various challenges and the areas where more evidence is required.

  1. Experimental setup for camera-based measurements of electrically and optically stimulated luminescence of silicon solar cells and wafers.

    PubMed

    Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf

    2011-03-01

    We report in detail on the luminescence imaging setup developed in recent years in our laboratory. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence), using a power supply for carrier injection, or optically (photoluminescence), using a laser as the illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured, spatially resolved, using a camera. We give details of the various components, including the cameras, the optical filters for electro- and photoluminescence, the semiconductor laser and the four-quadrant power supply. We compare a silicon charge-coupled device (CCD) camera with a back-illuminated silicon CCD camera comprising an electron-multiplier gain and with a complementary metal-oxide-semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon, we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operating conditions. PMID:21456750
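The noise comparison described above is usually framed with a simple photon-transfer model: shot noise from signal and dark current, plus read noise, with an EM-CCD trading reduced effective read noise for an excess-noise factor. A sketch under that textbook model (all electron counts below are assumed, not the paper's measurements):

```python
import numpy as np

def snr(signal_e, dark_e, read_noise_e, emgain=1.0, enf=1.0):
    """Per-pixel SNR in electrons under a simple photon-transfer model.
    For an EM-CCD, multiplication noise is modelled by an excess noise
    factor (enf ~ sqrt(2)) while read noise is divided by the EM gain."""
    shot_and_dark = enf ** 2 * (signal_e + dark_e)
    return signal_e / np.sqrt(shot_and_dark + (read_noise_e / emgain) ** 2)

# Illustrative low-light comparison (assumed values)
conventional = snr(100, 10, read_noise_e=30)
emccd = snr(100, 10, read_noise_e=30, emgain=300, enf=np.sqrt(2))
print(conventional, emccd)
```

At these low signal levels the EM-CCD wins because the 30 e⁻ read noise dominates the plain CCD; at high signal levels the sqrt(2) excess-noise penalty reverses the ordering, which is why such comparisons are made across operating conditions.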

  2. Multi-PSPMT scintillation camera

    SciTech Connect

    Pani, R.; Pellegrini, R.; Trotta, G.; Scopinaro, F.; Soluri, A.; Vincentis, G. de; Scafe, R.; Pergola, A.

    1999-06-01

    Gamma-ray imaging is usually accomplished by the use of a relatively large scintillating crystal coupled to either a number of photomultipliers (PMTs) (Anger camera) or a single large position-sensitive PMT (PSPMT). Recently, the development of new diagnostic techniques, such as scintimammography and radio-guided surgery, has highlighted a number of significant limitations of the Anger camera in such imaging procedures. In this paper a dedicated gamma camera is proposed for clinical applications, with the aim of improving image quality by utilizing detectors with an appropriate size and shape for the part of the body under examination. This novel scintillation camera is based upon an array of PSPMTs (Hamamatsu R5900-C8). The basic concept of this camera is identical to that of the Anger camera, with the exception of the substitution of PSPMTs for the PMTs. In this configuration it is possible to use the high resolution of the PSPMTs and still correctly position events lying between PSPMTs. In this work the test configuration is a 2 × 2 array of PSPMTs. Some advantages of this camera are: spatial resolution less than 2 mm FWHM, good linearity, thickness less than 3 cm, light weight, lower cost than an equivalent-area PSPMT, large detection area when coupled to scintillating arrays, small dead boundary zone (< 3 mm) and flexibility in the shape of the camera.
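Positioning events that span tubes relies on Anger (centroid) arithmetic: the event coordinate is the signal-weighted mean of the readout-channel centre positions. A toy sketch for a 2 × 2 layout (coordinates and charges below are hypothetical):

```python
import numpy as np

# Centre coordinates of the four readout channels (arbitrary units)
centers_x = np.array([[-1.0,  1.0], [-1.0,  1.0]])
centers_y = np.array([[ 1.0,  1.0], [-1.0, -1.0]])
signals   = np.array([[30.0, 10.0], [30.0, 10.0]])  # hypothetical charges

# Anger logic: signal-weighted centroid of the channel centres
total = signals.sum()
x = (centers_x * signals).sum() / total
y = (centers_y * signals).sum() / total
print(x, y)  # -0.5 0.0: event left of centre, vertically centred
```

Because the light from a scintillation event spreads across neighbouring channels, the same arithmetic localizes events that fall in the gaps between PSPMTs, which is the property the camera above exploits.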

  3. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    SciTech Connect

    Garcia, Marie-Paule; Villoing, Daphnée; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-15

    computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.

  4. Matching the Best Viewing Angle in Depth Cameras for Biomass Estimation Based on Poplar Seedling Geometry

    PubMed Central

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-01-01

    In energy crops for biomass production, a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor for estimating plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (−45°). The ground truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with the 45°, 90° and −45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated to within a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  5. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-01-01

    In energy crops for biomass production, a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor for estimating plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with the 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated to within a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  6. Proactive PTZ Camera Control

    NASA Astrophysics Data System (ADS)

    Qureshi, Faisal Z.; Terzopoulos, Demetri

    We present a visual sensor network—comprising wide field-of-view (FOV) passive cameras and pan/tilt/zoom (PTZ) active cameras—capable of automatically capturing closeup video of selected pedestrians in a designated area. The passive cameras can track multiple pedestrians simultaneously and any PTZ camera can observe a single pedestrian at a time. We propose a strategy for proactive PTZ camera control where cameras plan ahead to select optimal camera assignment and handoff with respect to predefined observational goals. The passive cameras supply tracking information that is used to control the PTZ cameras.
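The camera-assignment problem sketched above can be illustrated with a toy cost-based assignment: each PTZ camera is given the unassigned pedestrian it can observe most cheaply (e.g., smallest slew angle). This greedy rule is only a stand-in for the paper's look-ahead planner, and the cost matrix is hypothetical:

```python
import numpy as np

# cost[i, j]: hypothetical cost for PTZ camera i to observe pedestrian j
cost = np.array([[2.0, 5.0, 9.0],
                 [4.0, 1.0, 7.0]])

assigned, taken = {}, set()
# Serve the camera with the cheapest best option first, then greedily
# give each camera its lowest-cost pedestrian that is still unassigned.
for cam in np.argsort(cost.min(axis=1)):
    for ped in np.argsort(cost[cam]):
        if ped not in taken:
            assigned[int(cam)] = int(ped)
            taken.add(int(ped))
            break
print(assigned)
```

A proactive planner differs precisely in that it also scores future handoffs, predicted pedestrian trajectories, and observational goals rather than only the instantaneous cost.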

  7. Assessment of Tumor Volumes in Skull Base Glomus Tumors Using Gluc-Lys[{sup 18}F]-TOCA Positron Emission Tomography

    SciTech Connect

    Astner, Sabrina T.; Bundschuh, Ralph A.; Beer, Ambros J.; Ziegler, Sibylle I.; Krause, Bernd J.; Schwaiger, Markus; Molls, Michael; Grosu, Anca L.; Essler, Markus

    2009-03-15

    Purpose: To assess a threshold for Gluc-Lys[{sup 18}F]-TOCA positron emission tomography (PET) in target volume delineation of glomus tumors in the skull base and to compare with MRI-based target volume delineation. Methods and Materials: The threshold for volume segmentation in the PET images was determined by a phantom study. Nine patients with a total of 11 glomus tumors underwent PET either with Gluc-Lys[{sup 18}F]-TOCA or with {sup 68}Ga-DOTATOC (in 1 case). All patients were additionally scanned by MRI. Positron emission tomography and MR images were transferred to a treatment-planning system; MR images were analyzed for lesion volume by two observers, and PET images were analyzed by a semiautomated thresholding algorithm. Results: Our phantom study revealed that 32% of the maximum standardized uptake value is an appropriate threshold for tumor segmentation in PET-based target volume delineation of gross tumors. Target volume delineation by MRI was characterized by high interobserver variability. In contrast, interobserver variability was minimal if fused PET/MRI images were used. The gross tumor volumes (GTVs) determined by PET (GTV-PET) showed a statistically significant correlation with the GTVs determined by MRI (GTV-MRI) in primary tumors; in recurrent tumors higher differences were found. The mean GTV-MRI was significantly higher than mean GTV-PET. The increase added by MRI to the common volume was due to scar tissue with strong signal enhancement on MRI. Conclusions: In patients with glomus tumors, Gluc-Lys[{sup 18}F]-TOCA PET helps to reduce interobserver variability if an appropriate threshold for tumor segmentation has been determined for institutional conditions. Especially in patients with recurrent tumors after surgery, Gluc-Lys[{sup 18}F]-TOCA PET improves the accuracy of GTV delineation.
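The semi-automated segmentation described above reduces to a fixed-fraction threshold on the maximum uptake. A one-dimensional sketch (the SUV profile is invented for illustration; only the 32% figure comes from the record):

```python
import numpy as np

def segment_gtv(suv, threshold_fraction=0.32):
    """Fixed-threshold segmentation: voxels at or above a fraction of the
    maximum standardized uptake value are assigned to the GTV."""
    return suv >= threshold_fraction * suv.max()

# Toy 1-D profile through a hot lesion (hypothetical SUVs)
suv = np.array([0.5, 1.0, 3.0, 8.0, 10.0, 7.5, 2.0, 0.8])
mask = segment_gtv(suv)
print(mask.astype(int))  # [0 0 0 1 1 1 0 0]
```

Because the threshold is relative to SUVmax, the same rule adapts across lesions of different absolute uptake, which is what removes most of the interobserver variability reported above.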

  8. Advanced positron sources

    NASA Astrophysics Data System (ADS)

    Variola, A.

    2014-03-01

    Positron sources are a critical system for future lepton collider projects. Due to the large beam emittance at production and the limitations imposed by target heating and mechanical stress, the main collider parameters fixing the luminosity are constrained by the e+ sources. In this context, the damping ring design boundary conditions and the final performance are also set by the injected positron beam. At present, different schemes are being considered to increase the production and capture yield of positron sources, to reduce the impact of the energy deposited in the converter target, and to increase the injection efficiency into the damping ring. The final results have a strong impact not only on collider performance but also on cost optimization. After a short introduction illustrating their fundamental role, the basic positron source scheme and the performance of existing sources will be illustrated. The main innovative designs for the advanced sources of future colliders will be reviewed and the different technologies under development presented. Finally, the positron-plasma R&D experiments and the futuristic proposals for positron sources will be reviewed.

  9. Positrons from supernovae

    NASA Technical Reports Server (NTRS)

    Chan, Kai-Wing; Lingenfelter, Richard E.

    1993-01-01

    Positrons are produced in the ejecta of supernovae by the decay of nucleosynthetic Co-56, Ti-44, and Al-26. We calculate the probability that these positrons can survive without annihilating in the supernova ejecta, and we show that enough of them should escape into the interstellar medium to account for the observed diffuse Galactic annihilation radiation. The surviving positrons are carried by the expanding ejecta into the interstellar medium, where their annihilation lifetime of 10^5-10^6 yr is much longer than the average supernova occurrence interval of about 100 yr. Thus, annihilating positrons from thousands of supernovae throughout the Galaxy produce a steady diffuse flux of annihilation radiation. We further show that combining the calculated positron survival fractions and nucleosynthetic yields for current supernova models with the estimated supernova rates and the observed flux of diffuse Galactic annihilation radiation suggests that the present Galactic rate of Fe-56 nucleosynthesis is about 0.8 +/- 0.6 solar mass per 100 yr.

  10. Photon-counting gamma camera based on columnar CsI(Tl) optically coupled to a back-illuminated CCD

    PubMed Central

    Miller, Brian W.; Barber, H. Bradford; Barrett, Harrison H.; Chen, Liying; Taylor, Sean J.

    2010-01-01

    Recent advances have been made in a new class of CCD-based, single-photon-counting gamma-ray detectors which offer sub-100 μm intrinsic resolutions.1–7 These detectors show great promise in small-animal SPECT and molecular imaging and exist in a variety of configurations. Typically, a columnar CsI(Tl) scintillator or a radiography screen (Gd2O2S:Tb) is imaged onto the CCD. Gamma-ray interactions are seen as clusters of signal spread over multiple pixels. When the detector is operated in a charge-integration mode, signal spread across pixels results in spatial-resolution degradation. However, if the detector is operated in photon-counting mode, the gamma-ray interaction position can be estimated using either Anger (centroid) estimation or maximum-likelihood position estimation resulting in a substantial improvement in spatial resolution.2 Due to the low-light-level nature of the scintillation process, CCD-based gamma cameras implement an amplification stage in the CCD via electron multiplying (EMCCDs)8–10 or via an image intensifier prior to the optical path.1 We have applied ideas and techniques from previous systems to our high-resolution LumiSPECT detector.11, 12 LumiSPECT is a dual-modality optical/SPECT small-animal imaging system which was originally designed to operate in charge-integration mode. It employs a cryogenically cooled, high-quantum-efficiency, back-illuminated large-format CCD and operates in single-photon-counting mode without any intermediate amplification process. Operating in photon-counting mode, the detector has an intrinsic spatial resolution of 64 μm compared to 134 μm in integrating mode. PMID:20890397

  11. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  12. Positron annihilation studies of organic superconductivity

    SciTech Connect

    Yen, H.L.; Lou, Y.; Ali, E.H.

    1994-09-01

    The positron lifetimes of two organic superconductors, {kappa}-(ET){sub 2}Cu(NCS){sub 2} and {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br, are measured as a function of temperature across T{sub c}. A drop of the positron lifetime below T{sub c} is observed. Positron-electron momentum densities are measured using 2D-ACAR to search for the Fermi surface in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br. Positron density distributions and positron-electron overlaps are calculated using the orthogonalized linear combination of atomic orbitals (OLCAO) method to interpret the temperature dependence due to the local charge transfer, which is inferred to be related to the superconducting transition. The 2D-ACAR results in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br are compared with theoretical band calculations based on a first-principles local density approximation. The importance of performing accurate band calculations for the interpretation of positron annihilation data is emphasized.

  13. The NEAT Camera Project

    NASA Technical Reports Server (NTRS)

    Newburn, Ray L., Jr.

    1995-01-01

    The NEAT (Near Earth Asteroid Tracking) camera system consists of a camera head with a 6.3 cm square 4096 x 4096 pixel CCD, fast electronics, and a Sun Sparc 20 data and control computer with dual CPUs, 256 Mbytes of memory, and 36 Gbytes of hard disk. The system was designed for optimum use with an Air Force GEODSS (Ground-based Electro-Optical Deep Space Surveillance) telescope. The GEODSS telescopes have 1 m f/2.15 objectives of the Ritchey-Chretian type, designed originally for satellite tracking. Installation of NEAT began July 25 at the Air Force Facility on Haleakala, a 3000 m peak on Maui in Hawaii.

  14. A long-range camera based on an HD MCT array of 12μm pixels

    NASA Astrophysics Data System (ADS)

    Davy, D.; Ashley, S.; Davison, B.; Ashcroft, A.; McEwen, R. K.; Moore, R.

    2014-06-01

    The development of a new thermal imaging camera, for long range surveillance applications, is described together with the enabling technology. Previous publications have described the development of large arrays of 12μm pixels using Metal Organic Vapour Phase Epitaxy (MOVPE) grown Mercury Cadmium Telluride (MCT) for wide area surveillance applications. This technology has been leveraged to produce the low cost 1280×720 pixel Medium Wave IR focal plane array at the core of the new camera. Also described is the newly developed, high performance, ×12 continuous zoom lens which, together with the detector, achieves an Instantaneous Field of View (IFOV) of 12.5μrad/pixel enabling long detection, recognition and identification ranges. Novel image processing features, including the turbulence mitigation algorithms deployed in the camera processing electronics, are also addressed. Resultant imagery and performance will be presented.
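The quoted resolution figures are related by the small-angle relation IFOV = pixel pitch / focal length; the implied focal length at the narrow end of the zoom follows directly (the derivation is ours, the 12 µm and 12.5 µrad figures are from the record):

```python
# IFOV (rad/pixel) = pixel pitch (m) / focal length (m)
pitch = 12e-6   # m, detector pixel pitch
ifov = 12.5e-6  # rad/pixel, quoted instantaneous field of view
focal_length = pitch / ifov
print(focal_length)  # 0.96 m
```

At that IFOV, a 2 m target subtends one pixel at roughly 160 km, which is the sense in which such systems are "long range"; usable ranges are of course far shorter once atmospherics and the multi-pixel requirements of recognition criteria are included.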

  15. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    NASA Astrophysics Data System (ADS)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA MASCARA is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor nearly the entire sky at that location. Once all stations have been installed, MASCARA will be able to provide nearly 24-hour coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V=4-8 magnitude range - currently not probed by space- or ground-based surveys. The bright, nearby transiting planet systems which MASCARA will discover will be the key targets for detailed planet-atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras, showing that a precision of 0.3-1% per hour can be readily achieved. We plan to roll out the first MASCARA station before the end of 2013. Within two years, a five-station MASCARA can discover up to a dozen of the brightest transiting planet systems in the sky.

  16. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    NASA Astrophysics Data System (ADS)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground-based remote sensing techniques to monitor and detect this activity is essential, but the required equipment and maintenance are often expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near-infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near-real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open-source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
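The brightness-based flagging described above can be sketched in Python, the language the record names. The running-statistics rule and all numbers below are our own illustration, not the authors' actual script:

```python
import numpy as np

def flag_activity(brightness_history, new_mean, k=3.0):
    """Flag a frame whose mean pixel brightness exceeds the running mean of
    recent frames by more than k standard deviations (hypothetical rule in
    the spirit of the monitoring scripts described above)."""
    hist = np.asarray(brightness_history, dtype=float)
    return new_mean > hist.mean() + k * hist.std()

# Hypothetical quiescent baseline of mean frame brightness values
quiet = [21.0, 20.5, 21.2, 20.8, 21.1, 20.9]
print(flag_activity(quiet, 21.3), flag_activity(quiet, 35.0))
```

In a deployed version the baseline would be a rolling window, and the threshold would need tuning against diurnal illumination changes and webcam gain shifts to keep the false-alarm rate usable.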

  18. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched for and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positions. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, and the plenoptic camera with different sizes of microlenses. Finally, we verify our method on raw data from Lytro. The experiments show that our method is more fully automated than previously published methods.
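
    A minimal sketch of the microlens-center search, using a simple threshold-plus-connected-components stand-in for the digital-morphology step (the function name, threshold, and test image are illustrative assumptions):

```python
def microlens_centers(white_image, threshold=128):
    """Locate microlens image centers on a white (flat-field) image:
    binarize, find connected bright blobs, return their centroids
    sorted by relative position (row-major)."""
    h, w = len(white_image), len(white_image[0])
    seen = [[False] * w for _ in range(h)]
    centers = []
    for y in range(h):
        for x in range(w):
            if white_image[y][x] >= threshold and not seen[y][x]:
                stack, pixels = [(y, x)], []   # flood-fill one blob
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy+1, cx), (cy-1, cx), (cy, cx+1), (cy, cx-1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and white_image[ny][nx] >= threshold \
                                and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centers.append((cy, cx))
    centers.sort()
    return centers

# Two synthetic microlens spots on a 6x6 "white image"
img = [[0] * 6 for _ in range(6)]
for y, x in [(1, 1), (1, 2), (2, 1), (2, 2), (1, 4), (2, 4)]:
    img[y][x] = 200
print(microlens_centers(img))  # [(1.5, 1.5), (1.5, 4.0)]
```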

  19. Measured performance of a low-cost thermal infrared pushbroom camera based on uncooled microbolometer FPA for space applications

    NASA Astrophysics Data System (ADS)

    Geoffray, Herve; Guerin, Francois

    2001-12-01

    The FUEGO system is a remote sensing satellite constellation aimed at providing early fire alarms throughout the forest-fire risk areas of Europe and other temperate regions. An excellent revisit time (<16 min) can be achieved from a low Earth orbit constellation of 12 mini-satellites. Each mini-satellite carries infrared sensors in MIR, TIR, and VIS/NIR bands operating in push-broom mode, plus a depointing mirror to cover a large (2500 km) swath, ensuring early detection of fire outbreaks. This paper presents the thermal infrared (TIR) camera characteristics. The main purposes of the TIR channels are the discrimination of clouds and the detection of forest-fire false alarms during low-light or night operation. The main requirements for the TIR camera are: spectral range 8-12 μm; FOV = ±7.2° (177 km on the ground); ground resolution 369 m; NETD < 0.4 K at a 300 K blackbody temperature; and a large dynamic range of radiance (equivalent blackbody temperatures of 240 K to 380 K). The TIR push-broom camera is built around an off-the-shelf SOFRADIR microbolometer FPA of 320 × 240 elements with a 45 μm pitch. The focal plane is uncooled and operates at T = 303 K. The paper details the tests performed on the engineering model of the camera; in particular, the radiometric characterization and MTF measurements are described. The demonstrated camera performance, together with the low cost and complexity of the camera, offers a large field of opportunities for future space applications.

  20. Novel positioning method using Gaussian mixture model for a monolithic scintillator-based detector in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Bae, Seungbin; Lee, Kisung; Seo, Changwoo; Kim, Jungmin; Joo, Sung-Kwan; Joung, Jinhun

    2011-09-01

    We developed a high-precision position decoding method for a positron emission tomography (PET) detector that consists of a thick slab scintillator coupled to a multichannel photomultiplier tube (PMT). The DETECT2000 simulation package was used to validate light response characteristics for a 48.8 mm × 48.8 mm × 10 mm slab of lutetium oxyorthosilicate coupled to a 64-channel PMT. The data are then combined to produce light collection histograms. We employed a Gaussian mixture model (GMM) to parameterize the composite light response with multiple Gaussian mixtures. In the training step, the light photons acquired by the N PMT channels were used as an N-dimensional feature vector and fed into a GMM training model to generate optimal parameters for M mixtures. In the positioning step, we decoded the spatial locations of incident photons by evaluating a sample feature vector with respect to the trained mixture parameters. The average spatial resolutions after positioning with four mixtures were 1.1 mm full width at half maximum (FWHM) at the corner and 1.0 mm FWHM at the center section. This indicates that the proposed algorithm achieves high performance in both spatial resolution and positioning bias, especially at the corner section of the detector.
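
    The positioning step can be illustrated with a stripped-down version of the idea: one diagonal-covariance Gaussian per trained position, standing in for the paper's M-mixture model (all names, channel counts, and parameter values below are toy assumptions):

```python
import math

def log_gauss(x, mu, sigma):
    """Log of a 1-D Gaussian density."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def decode_position(feature, model):
    """Pick the trained position whose diagonal Gaussian best explains the
    N-channel light-response feature vector (maximum log-likelihood)."""
    best, best_ll = None, -math.inf
    for pos, (mus, sigmas) in model.items():
        ll = sum(log_gauss(x, m, s) for x, m, s in zip(feature, mus, sigmas))
        if ll > best_ll:
            best, best_ll = pos, ll
    return best

# Toy "trained" model: 4 PMT channels, two candidate positions
model = {
    "corner": ([90, 30, 10, 5], [10, 8, 4, 3]),
    "center": ([40, 40, 40, 40], [6, 6, 6, 6]),
}
print(decode_position([85, 28, 12, 6], model))   # corner
print(decode_position([42, 38, 41, 39], model))  # center
```

    The real detector uses 64 channels and multiple mixtures per location, but the likelihood-evaluation principle is the same.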

  1. The calibration of cellphone camera-based colorimetric sensor array and its application in the determination of glucose in urine.

    PubMed

    Jia, Ming-Yan; Wu, Qiong-Shui; Li, Hui; Zhang, Yu; Guan, Ya-Feng; Feng, Liang

    2015-12-15

    In this work, a novel approach that can calibrate the colors obtained with a cellphone camera was proposed for a colorimetric sensor array. Variations in ambient light conditions, imaging positions, and even cellphone brands could all be compensated for by taking the black and white backgrounds of the sensor array as references, thereby yielding accurate measurements. The proposed calibration approach was successfully applied to the detection of glucose in urine with a colorimetric sensor array. Snapshots of the glucose sensor array taken with a cellphone camera were calibrated by the proposed compensation method, and urine samples at different glucose concentrations were well discriminated, with no confusion, after hierarchical clustering analysis. PMID:26275712
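
    Compensating against the black and white backgrounds reduces to a per-channel normalization. A minimal sketch (the function name and sample values are illustrative, not from the paper):

```python
def calibrate_channel(raw, black, white):
    """Normalize a raw color-channel value against the black and white
    reference patches captured in the same snapshot, cancelling out
    illumination and device differences."""
    return (raw - black) / (white - black)

# The same sensor spot photographed under two lighting conditions
bright = calibrate_channel(raw=180.0, black=40.0, white=240.0)
dim    = calibrate_channel(raw=95.0,  black=25.0, white=125.0)
print(round(bright, 2), round(dim, 2))  # 0.7 0.7 -- same corrected value
```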

  2. Fine scale structures of pulsating auroras in the early recovery phase of substorm using ground-based EMCCD camera

    NASA Astrophysics Data System (ADS)

    Nishiyama, Takanori; Sakanoi, Takeshi; Miyoshi, Yoshizumi; Kataoka, Ryuho; Hampton, Donald; Katoh, Yuto; Asamura, Kazushi; Okano, Shoichi

    2012-10-01

    We have carried out ground-based observations, optimized for the temporal and spatial characteristics of pulsating auroras (PAs) at the micro/meso scale, using an electron multiplying charge coupled device (EMCCD) camera with a wide field of view corresponding to 100 × 100 km at an altitude of 110 km and a sampling rate of up to 100 frames per second. We focus on transient PAs propagating southward around 1100 UT, in the early recovery phase of the substorm on 4 March 2011. Three independent patches (PA1-3), each with a different period between 4 and 7 s, were observed, which means that the periodicity is not explained by the electron bounce motion and depends strongly on local plasma conditions in the magnetosphere or in the ionosphere. A further insight is that only PA1 also showed a sharp modulation peak around 1.5 Hz, with a narrow frequency width of 0.30 Hz, and these strong modulations existed as a small spot in the center of PA1. We have also conducted cross-spectrum analysis and obtained coherence and phase distributions for auroral variations between 0.1 and 3.0 Hz. The results indicate that low-frequency variations from 0.2 to 0.5 Hz inside PA1-3 propagated as a collective motion in well-defined directions. The estimated horizontal propagation velocities ranged from 50 to 120 km/s at the auroral altitude. These velocities are roughly consistent with the Alfven speed at the magnetic equator, which suggests that compressional waves affect PAs via modulations of the ambient plasma environment.

  3. Positron beam position measurement for a beam containing both positrons and electrons

    SciTech Connect

    Sereno, N.S.; Fuja, R.

    1996-08-01

    Positron beam position measurement for the Advanced Photon Source (APS) linac beam is affected by the presence of electrons that are also captured and accelerated along with the positrons. This paper presents a method of measuring positron position in a beam consisting of alternating bunches of positrons and electrons. The method is based on Fourier analysis of a stripline signal at the bunching and first harmonic frequencies. In the presence of a mixed-species beam, a certain linear combination of the bunching and first harmonic signals depends only on the position and charge of one species of particle. A formula is derived for the stripline signal at all harmonics of the bunching frequency and is used to compute the expected signal power at the bunching and first harmonic frequencies for typical electron and positron bunch charges. The stripline is calibrated by measuring the signal power content at the bunching and first harmonic frequencies for a single-species beam. A circuit is presented that will be used with an APS positron linac stripline beam position monitor to detect the bunching and first harmonic signals for a beam of positrons and electrons.
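
    The species separation amounts to inverting a linear combination of the two measured harmonic signals. A toy sketch, assuming for illustration that the bunching-frequency signal carries the sum of the two species' contributions and the first harmonic their difference; the real coefficients come from the stripline calibration described in the paper:

```python
def separate_species(s_bunching, s_harmonic):
    """Recover per-species contributions from the two measured harmonics,
    assuming s_bunching = p + e and s_harmonic = p - e (illustrative unit
    coefficients; actual ones are set by stripline calibration)."""
    p = 0.5 * (s_bunching + s_harmonic)
    e = 0.5 * (s_bunching - s_harmonic)
    return p, e

print(separate_species(10.0, 4.0))  # (7.0, 3.0)
```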

  4. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near-IR and one in the shortwave infrared. The imaging concept in the VNIR bands is based on push-broom scanning using a linear-array silicon charge coupled device (CCD) based focal plane array (FPA). The on-board calibration unit for these CCD-based FPAs is used to monitor any degradation in the FPA over the entire mission life. Four LEDs are operated in constant-current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λp = 650 nm) for the development of the on-board calibration unit of the AWiFS camera of Resourcesat-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulation and experimental results for the CCD output profile at different LED combinations in constant-current mode.

  5. Ground-based detection of nighttime clouds above Manila Observatory (14.64°N, 121.07°E) using a digital camera.

    PubMed

    Gacal, Glenn Franco B; Antioquia, Carlo; Lagrosas, Nofel

    2016-08-01

    Ground-based cloud detection at nighttime is achieved by using cameras, lidars, and ceilometers. Despite these numerous instruments gathering cloud data, there is still an acknowledged scarcity of information on quantified local cloud cover, especially at nighttime. In this study, a digital camera is used to continuously collect images near the sky zenith at nighttime in an urban environment. An algorithm is developed to analyze pixel values of images of nighttime clouds. A minimum threshold pixel value of 17 is assigned to determine cloud occurrence. The algorithm uses temporal averaging to estimate the cloud fraction based on the results within the limited field of view. The analysis of the data from the months of January, February, and March 2015 shows that cloud occurrence is low during the months with relatively lower minimum temperature (January and February), while cloud occurrence during the warmer month (March) increases. PMID:27505386
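
    The threshold-and-average scheme can be sketched as follows. The frame data are synthetic; only the minimum threshold value of 17 comes from the paper, and whether the comparison is inclusive is an assumption here:

```python
def cloud_fraction(frames, threshold=17):
    """Cloud fraction estimated by counting pixels at or above the
    minimum threshold in each frame, then temporally averaging the
    per-frame fractions."""
    fractions = []
    for frame in frames:
        pixels = [p for row in frame for p in row]
        cloudy = sum(1 for p in pixels if p >= threshold)
        fractions.append(cloudy / len(pixels))
    return sum(fractions) / len(fractions)

frames = [
    [[5, 20], [30, 8]],    # 2 of 4 pixels cloudy -> 0.5
    [[18, 25], [40, 3]],   # 3 of 4 pixels cloudy -> 0.75
]
print(cloud_fraction(frames))  # 0.625
```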

  6. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) system uses UAV and remote controls through connections of ground control system using bandwidth of about 430 MHz radio Frequency (RF) modem. However, as mentioned earlier, existing method of using RF modem has limitations in long distance communication. The Smart Camera equipments's LTE (long-term evolution), Bluetooth, and Wi-Fi to implement UAV that uses developed UAV communication module system carried out the close aerial photogrammetry with the automatic shooting. Automatic shooting system is an image capturing device for the drones in the area's that needs image capturing and software for loading a smart camera and managing it. This system is composed of automatic shooting using the sensor of smart camera and shooting catalog management which manages filmed images and information. Processing UAV imagery module used Open Drone Map. This study examined the feasibility of using the Smart Camera as the payload for a photogrammetric UAV system. The open soure tools used for generating Android, OpenCV (Open Computer Vision), RTKLIB, Open Drone Map.

  7. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is available. However, the equation provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature this is not a problem; for room temperature cameras, however, the technique needs adjustment. This article describes the adjustment made to the equation and a test of this method.
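
    The idea can be sketched with a generic mean-variance (photon-transfer) gain estimate in which matching dark frames are subtracted; this is not necessarily the article's exact equation, and the synthetic-frame parameters below are assumptions:

```python
import random
from statistics import mean, pvariance

def camera_gain(flat_a, flat_b, dark_a, dark_b):
    """Photon-transfer gain estimate (e-/DN) from two flat-field frames,
    with the dark-current contribution removed using two dark frames
    taken at the same exposure."""
    signal = mean(flat_a) + mean(flat_b) - mean(dark_a) - mean(dark_b)
    # Differencing frame pairs cancels fixed-pattern noise; subtracting
    # the dark-pair variance removes the dark-current shot noise.
    noise = pvariance([a - b for a, b in zip(flat_a, flat_b)]) \
          - pvariance([a - b for a, b in zip(dark_a, dark_b)])
    return signal / noise

# Synthetic frames: Gaussian approximation to Poisson photon statistics
random.seed(1)
TRUE_GAIN = 2.0    # electrons per digital number (assumed for the demo)
DARK_E, SIG_E, N = 50.0, 400.0, 20000

def frame(mean_e):
    return [random.gauss(mean_e, mean_e ** 0.5) / TRUE_GAIN for _ in range(N)]

g = camera_gain(frame(SIG_E + DARK_E), frame(SIG_E + DARK_E),
                frame(DARK_E), frame(DARK_E))
print(round(g, 2))  # recovers a value close to the simulated gain of 2.0
```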

  8. Modeling of photodetectors based on single-photon avalanche diode arrays for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Corbeil Therrien, Audrey

    Positron emission tomography (PET) is a valuable tool for preclinical research and medical diagnosis. This technique provides a quantitative image of specific metabolic functions through the detection of annihilation photons. These photons are detected using two components. First, a scintillator converts the energy of the 511 keV photon into photons in the visible spectrum. A photodetector then converts the light energy into an electrical signal. Recently, arrays of single-photon avalanche diodes (SPADs) have attracted considerable interest for PET. These arrays form sensitive, robust, and compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and of the readout electronics must be optimized to reach the best performance for PET. Optimizing the array quickly becomes a difficult task, because the various parameters interact in complex ways with the avalanche and noise-generation processes. Finally, readout electronics for SPAD arrays are still rudimentary, and it would be worthwhile to analyze different readout strategies. The most economical way to answer this question is to use a simulator to converge toward the configuration giving the best performance. This thesis presents the development of such a simulator. It models the behavior of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources: thermal noise, correlated afterpulsing, and optical crosstalk. The simulator also makes it possible to test and compare new readout-electronics approaches better suited to this type of detector. Ultimately, the simulator aims to

  9. HHEBBES! All sky camera system: status update

    NASA Astrophysics Data System (ADS)

    Bettonvil, F.

    2015-01-01

    A status update is given on the HHEBBES! all-sky camera system. HHEBBES!, an automatic camera for capturing bright meteor trails, is based on a DSLR camera and a liquid crystal chopper for measuring the angular velocity. The purposes of the system are to (a) recover meteorites and (b) identify origin/parent bodies. In 2015, two new cameras were rolled out: BINGO!, like HHEBBES! located in the Netherlands, and POgLED, in Serbia. BINGO! is the first camera equipped with a longer focal-length fisheye lens, to further increase the accuracy. Several minor improvements have been made, and the data reduction pipeline was used for processing two prominent Dutch fireballs.

  10. Positron emission tomography

    NASA Astrophysics Data System (ADS)

    Yamamoto, Y. Lucas; Thompson, Christopher J.; Diksic, Mirko; Meyer, Ernest; Feindel, William H.

    One of the most exciting new technologies introduced in the last 10 yr is positron emission tomography (PET). PET provides quantitative, three-dimensional images for the study of specific biochemical and physiological processes in the human body. This approach is analogous to quantitative in-vivo autoradiography but has the added advantage of permitting non-invasive in vivo studies. PET scanning requires a small cyclotron to produce short-lived positron emitting isotopes such as oxygen-15, carbon-11, nitrogen-13 and fluorine-18. Proper radiochemical facilities and advanced computer equipment are also needed. Most important, PET requires a multidisciplinary scientific team of physicists, radiochemists, mathematicians, biochemists and physicians. This review analyzes the most recent trends in the imaging technology, radiochemistry, methodology and clinical applications of positron emission tomography.

  11. Alternative positron-target design for electron-positron colliders

    SciTech Connect

    Donahue, R.J.; Nelson, W.R.

    1991-04-01

    Current electron-positron linear colliders are limited in luminosity by the number of positrons which can be generated from targets presently used. This paper examines the possibility of using an alternate wire-target geometry for the production of positrons via an electron-induced electromagnetic cascade shower. 39 refs., 38 figs., 5 tabs.

  12. Production of a positron microprobe using a transmission remoderator.

    PubMed

    Fujinami, Masanori; Jinno, Satoshi; Fukuzumi, Masafumi; Kawaguchi, Takumi; Oguma, Koichi; Akahane, Takashi

    2008-01-01

    A production method for a positron microprobe using a β+-decay radioisotope (22Na) source has been investigated. When a magnetically guided positron beam was extracted from the magnetic field, the combination of an extraction coil and a magnetic lens enabled us to focus the positron beam by a factor of 10 and to achieve a high transport efficiency (71%). A 150-nm-thick Ni(100) thin film was mounted at the focal point of the magnetic lens and used as a remoderator for brightness enhancement in a transmission geometry. The remoderated positrons were accelerated by an electrostatic lens and focused on the target by an objective magnetic lens. As a result, a 4-mm-diameter positron beam could be transformed into a microprobe of 60 μm or less with 4.2% total efficiency. The S-parameter profile obtained by a single-line scan of a test specimen coincided well with the defect distribution. This positron microprobe technique is applicable to accelerator-based high-intensity positron sources, enables three-dimensional vacancy-type defect analysis, and can provide a positron source for a transmission positron microscope. PMID:18187852

  13. An innovative SiPM-based camera for gamma-ray astronomy with the small size telescopes of the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Schioppa, E. J.; Heller, M.; Troyano Pujadas, I.; della Volpe, D.; Favre, Y.; Montaruli, T.; Zietara, K.; Kasperek, J.; Marszalek, A.; Rajda, P.

    2016-01-01

    A prototype camera for one of the Cherenkov Telescope Array (CTA) small size telescope projects, the single-mirror Small Size Telescope (SST-1M), has been designed and is under construction. The camera is a hexagonal matrix of 1296 large-area (95 mm²) hexagonal silicon photomultipliers. The sensors are grouped into 108 modules of 12 pixels each, each hosting a preamplifier board and a slow-control board. Among its various functions, the latter implements compensation logic that adjusts the bias voltage of each sensor as a function of temperature. The fully digital readout and trigger system, DigiCam, is based on the latest generation of FPGAs, featuring a large number of high-speed I/O interfaces and allowing high data transfer rates in an extremely compact design.

  14. The future of space imaging. Report of a community-based study of an advanced camera for the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Brown, Robert A. (Editor)

    1993-01-01

    The scientific and technical basis for an Advanced Camera (AC) for the Hubble Space Telescope (HST) is discussed. In March 1992, the NASA Program Scientist for HST invited the Space Telescope Science Institute to conduct a community-based study of an AC, which would be installed on a scheduled HST servicing mission in 1999. The study had three phases: a broad community survey of views on candidate science programs and the required performance of the AC, an analysis of technical issues relating to its implementation, and a panel of experts to formulate conclusions and prioritize recommendations. From the assessment of the imaging tasks astronomers have proposed for or desired from HST, we believe the most valuable 1999 instrument would be a camera with both near-ultraviolet/optical (NUVO) and far-ultraviolet (FUV) sensitivity, and with both wide-field and high-resolution options.

  15. Effect of 11C-Methionine Positron Emission Tomography on Gross Tumor Volume Delineation in Stereotactic Radiotherapy of Skull Base Meningiomas

    SciTech Connect

    Astner, Sabrina T. Dobrei-Ciuchendea, Mihaela; Essler, Markus; Bundschuh, Ralf A.; Sai, Heitetsu; Schwaiger, Markus; Molls, Michael; Weber, Wolfgang A.; Grosu, Anca-Ligia

    2008-11-15

    Purpose: To evaluate the effect of trimodal image fusion using computed tomography (CT), magnetic resonance imaging (MRI), and 11C-methionine positron emission tomography (MET-PET) for gross tumor volume delineation in fractionated stereotactic radiotherapy of skull base meningiomas. Patients and Methods: In 32 patients with skull base meningiomas, the gross tumor volume (GTV) was outlined on CT scans fused to contrast-enhanced MRI (GTV-MRI/CT). A second GTV, encompassing the MET-PET-positive region only (GTV-PET), was generated. The additional information obtained by MET-PET concerning the GTV delineation was evaluated using the PET/CT/MRI co-registered images. The sizes of the overlapping regions of GTV-MRI/CT and GTV-PET were calculated, and the additional volume added by the complementing modality was determined. Results: The addition of MET-PET was beneficial for GTV delineation in all but 3 patients. MET-PET detected small tumor portions with a mean volume of 1.6 ± 1.7 cm³ that were not identified by CT or MRI. The mean percentage enlargement of the GTV using MET-PET as an additional imaging method was 9.4% ± 10.7%. Conclusions: Our data have demonstrated that integration of MET-PET in radiotherapy planning of skull base meningiomas can influence the GTV, possibly resulting in an increase as well as a decrease.

  16. Model-based correction for scatter and tailing effects in simultaneous 99mTc and 123I imaging for a CdZnTe cardiac SPECT camera

    NASA Astrophysics Data System (ADS)

    Holstensson, M.; Erlandsson, K.; Poludniowski, G.; Ben-Haim, S.; Hutton, B. F.

    2015-04-01

    An advantage of semiconductor-based dedicated cardiac single photon emission computed tomography (SPECT) cameras when compared to conventional Anger cameras is superior energy resolution. This provides the potential for improved separation of the photopeaks in dual radionuclide imaging, such as combined use of 99mTc and 123I. There is, however, the added complexity of tailing effects in the detectors that must be accounted for. In this paper we present a model-based correction algorithm which extracts the useful primary counts of 99mTc and 123I from projection data. Equations describing the in-patient scatter and tailing effects in the detectors are iteratively solved for both radionuclides simultaneously using a maximum a posteriori probability algorithm with one-step-late evaluation. Energy window-dependent parameters for the equations describing in-patient scatter are estimated using Monte Carlo simulations. Parameters for the equations describing tailing effects are estimated using virtually scatter-free experimental measurements on a dedicated cardiac SPECT camera with CdZnTe detectors. When applied to a phantom study with both 99mTc and 123I, results show that the estimated spatial distribution of events from 99mTc in the 99mTc photopeak energy window is very similar to that measured in a single-99mTc phantom study. The extracted images of primary events display increased cold lesion contrasts for both 99mTc and 123I.

  17. Moisture determination in composite materials using positron lifetime techniques

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Holt, W. R.; Mock, W., Jr.

    1980-01-01

    A technique was developed which has the potential of providing information on the moisture content as well as its depth in the specimen. This technique was based on the dependence of positron lifetime on the moisture content of the composite specimen. The positron lifetime technique of moisture determination and the results of the initial studies are described.

  18. High granularity tracker based on a Triple-GEM optically read by a CMOS-based camera

    NASA Astrophysics Data System (ADS)

    Marafini, M.; Patera, V.; Pinci, D.; Sarti, A.; Sciubba, A.; Spiriti, E.

    2015-12-01

    The detection of photons produced during avalanche development in gas chambers has been the subject of detailed studies in the past. The great progress achieved in recent years in the performance of micro-pattern gas detectors on one side, and of photo-sensors on the other, makes it possible to build high-granularity and very sensitive particle trackers. In this paper, the results obtained with a triple-GEM structure read out by a CMOS-based sensor are described. The use of a He/CF4 (60/40) gas mixture and a detailed optimization of the electric fields made it possible to obtain a very high GEM light yield. About 80 photons per primary electron were detected by the sensor, resulting in a very good capability of tracking both muons from cosmic rays and electrons from natural radioactivity.

  19. Characterization of InGaAs-based cameras for astronomical applications using a new VIS-NIR-SWIR detector test bench

    NASA Astrophysics Data System (ADS)

    Schindler, Karsten; Wolf, Jürgen; Krabbe, Alfred

    2014-07-01

    A new test bench for detector and camera characterization in the visible and near-infrared spectral range between 350-2500 nm has been set up at the Max Planck Institute for Solar System Research (MPS). The detector under study is illuminated by an integrating sphere fed with quasi-monochromatic light by a Czerny-Turner monochromator. A quartz tungsten halogen lamp is used as the light source for the monochromator. Si- and InGaAs-based photodiodes have been calibrated against secondary reference standards at PTB (Germany), NPL (UK), and NRC (Canada) for precise spectral flux measurements. The test bench allows measurements of fundamental detector properties such as linearity of response, conversion gain, full well capacity, quantum efficiency (QE), fixed pattern noise, and pixel response non-uniformity. The article focuses on the commissioning of the test bench and the subsequent performance evaluation and characterization of a commercial camera system with a 640 × 480 InGaAs detector, sensitive between 900 and 1650 nm. The study aimed at the potential use of InGaAs cameras in ground-based and airborne astronomical observations, or as target acquisition and tracking cameras in the NIR supporting infrared observations at longer wavelengths, e.g., on SOFIA. An intended future application of the test bench, in combination with an appropriate test dewar, is the characterization of focal plane assemblies for imaging spectrometers on spacecraft missions, such as the VIS-SWIR channel of MAJIS, the Moons and Jupiter Imaging Spectrometer aboard JUICE (Jupiter Icy Moons Explorer).

  20. Perceptual Color Characterization of Cameras

    PubMed Central

    Vazquez-Corral, Javier; Connah, David; Bertalmío, Marcelo

    2014-01-01

    Color camera characterization, mapping outputs from the camera sensors to an independent color space such as XYZ, is an important step in the camera processing pipeline. Until now, this procedure has been primarily solved by using a 3 × 3 matrix obtained via a least-squares optimization. In this paper, we propose to use the spherical sampling method, recently published by Finlayson et al., to perform a perceptual color characterization. In particular, we search for the 3 × 3 matrix that minimizes three different perceptual errors, one pixel-based and two spatially based. For the pixel-based case, we minimize the CIE ΔE error, while for the spatially based cases, we minimize both the S-CIELAB error and the CID error measure. Our results demonstrate an improvement of approximately 3% for the ΔE error, 7% for the S-CIELAB error, and 13% for the CID error measures. PMID:25490586
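
    The least-squares baseline that the paper improves on can be sketched with plain normal equations (all data and matrix values below are synthetic; the perceptual-error search itself is not reproduced here):

```python
def solve3(a, b):
    """Solve a 3x3 linear system a x = b by Gauss-Jordan elimination
    with partial pivoting."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(m[r][i]))
        m[i], m[piv] = m[piv], m[i]
        for r in range(3):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [x - f * y for x, y in zip(m[r], m[i])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_color_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M with rgb . M ~ xyz, via the normal
    equations (A^T A) M = A^T B, solved one XYZ channel at a time."""
    ata = [[sum(r[i] * r[j] for r in rgb) for j in range(3)] for i in range(3)]
    cols = []
    for k in range(3):
        atb = [sum(r[i] * x[k] for r, x in zip(rgb, xyz)) for i in range(3)]
        cols.append(solve3(ata, atb))
    # transpose the solved columns into the 3x3 characterization matrix
    return [[cols[k][i] for k in range(3)] for i in range(3)]

# Recover a known matrix from exactly generated training data
true_m = [[0.6, 0.3, 0.1], [0.2, 0.7, 0.1], [0.0, 0.1, 0.9]]
rgb = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0], [0.5, 0.2, 0.8]]
xyz = [[sum(r[i] * true_m[i][k] for i in range(3)) for k in range(3)] for r in rgb]
m = fit_color_matrix(rgb, xyz)
print([[round(v, 3) for v in row] for row in m])
```

    In practice the training pairs come from photographing a color chart with known XYZ values; the perceptual approach then refines this matrix against ΔE-style error measures rather than raw squared error.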