Science.gov

Sample records for camera based positron

  1. Detection of occult disease in breast cancer using fluorodeoxyglucose camera-based positron emission tomography.

    PubMed

    Pecking, A P; Mechelany-Corone, C; Bertrand-Kermorgant, F; Alberini, J L; Floiras, J L; Goupil, A; Pichon, M F

    2001-10-01

    An isolated increase of the blood tumor marker CA 15.3 in breast cancer is considered a sensitive indicator of occult metastatic disease but is not by itself sufficient for initiating therapeutic intervention. We investigated the potential of camera-based positron emission tomography (PET) imaging using [18F]-fluorodeoxyglucose (FDG) to detect clinically occult recurrences in 132 female patients (age, 35-69 years) treated for breast cancer, all presenting with an isolated increase in CA 15.3 without any other evidence of metastatic disease. FDG results were correlated with pathology or with sequentially guided conventional imaging. One hundred nineteen patients were eligible for correlations. Positive FDG scans were obtained for 106 patients, including 89 with a single lesion and 17 with 2 or more lesions. There were 92 true-positive and 14 false-positive cases, 10 of which became true positive within 1 year. Among the 13 negative cases, 7 were false negative and 6 were true negative. Camera-based PET using FDG successfully identified clinically occult disease with an overall sensitivity of 93.6% and a positive predictive value of 96.2%. The smallest detected lesion was a 6 mm lymph node metastasis (tumor to nontumor ratio, 4:2). FDG camera-based PET localized tumors in 85.7% of cases suspected of clinically occult metastatic disease on the basis of a significant increase in the blood tumor marker. A positive FDG scan associated with an elevated CA 15.3 level is most consistent with metastatic relapse of breast cancer.
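
    The quoted figures follow from the standard confusion-matrix definitions if the 10 initially false-positive cases that became true positive within 1 year are counted as true positives. The short calculation below (an illustrative Python sketch, not part of the record) reproduces the reported sensitivity and positive predictive value under that assumption.

        # Hedged sketch: reproduce the quoted 93.6% sensitivity and 96.2% PPV,
        # assuming the 10 false positives that converted within 1 year are
        # reclassified as true positives (TP = 92 + 10, FP = 14 - 10, FN = 7).
        tp = 92 + 10
        fp = 14 - 10
        fn = 7

        sensitivity = tp / (tp + fn)   # 102 / 109 = 0.936
        ppv = tp / (tp + fp)           # 102 / 106 = 0.962
        print(f"sensitivity = {sensitivity:.1%}, PPV = {ppv:.1%}")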

  2. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  3. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  4. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  5. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  6. 21 CFR 892.1110 - Positron camera.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Positron camera. 892.1110 Section 892.1110 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES RADIOLOGY DEVICES Diagnostic Devices § 892.1110 Positron camera. (a) Identification. A...

  7. Bismuth germanate as a potential scintillation detector in positron cameras.

    PubMed

    Cho, Z H; Farukhi, M R

    1977-08-01

    Timing and energy resolutions of the bismuth germanate (Bi4Ge3O12) scintillation crystals were studied, with particular respect to a positron-camera application. In comparison with the NaI(Tl) system, the detection efficiency for annihilation radiation is more than triple, and coincidence detection efficiency is more than ten times as good. This paper explores the properties of the new scintillator material and their bearing on the spatial resolution and the efficiency of coincidence detection in positron cameras with stationary ring detectors.

  8. Investigational study of iodine-124 with a positron camera

    SciTech Connect

    Lambrecht, R.M.; Woodhouse, N.; Phillips, R.; Wolczak, D.; Qureshi, A.; Reyes, E.D.; Graser, C.; Al-Yanbawi, S.; Al-Rabiah, A.; Meyer, W.

    1988-01-01

    A case is presented in which I-124 produced by a clinical cyclotron was used clinically with a positron emission tomography camera. This represents the first report of the use of this modality with this radionuclide. We believe the increased spatial resolution of PET should be of value in evaluating thyroid disease.

  9. Pre-clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2010-10-01

    04-1-0594: Pre-clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices (...2004 - 20 SEP 2010). ... a compact and mobile gamma and positron imaging camera. This imaging device has several advantages over conventional systems: (1) greater

  10. A new gamma camera for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Schotanus, Paul

    1988-06-01

    The detection of annihilation radiation via its absorption in a barium fluoride (BaF2) crystal is described. The resulting scintillation light is detected in a multiwire proportional chamber filled with a photosensitive vapor. The use of a high density fast scintillator with a low pressure wire chamber offers a good detection efficiency and permits high count rates because of the small dead time. The physical background of the detection mechanism is explored and the performance parameters of a gamma camera using this principle are determined. The scintillation mechanism and physical characteristics of the BaF2 scintillator are examined. Ultraviolet scintillation materials consisting of rare-earth-doped fluorides are introduced.

  11. Design considerations for a high-spatial-resolution positron camera with dense-drift-space MWPC's

    NASA Astrophysics Data System (ADS)

    Delguerra, A.; Perez-Mendez, V.; Schwartz, G.; Nelson, W. R.

    1982-10-01

    A multiplane Positron Camera is proposed, made of six MWPC modules arranged to form the lateral surface of a hexagonal prism. Each module (50 x 50 sq cm) has a 2 cm thick lead-glass tube converter on both sides of an MWPC pressurized to 2 atm. Experimental measurements are presented to show how to reduce the parallax error by determining in which of the two converter layers the photon has interacted. The results of a detailed Monte Carlo calculation for the efficiency of this type of converter are shown to be in excellent agreement with the experimental measurements. The expected performance of the Positron Camera is presented: a true coincidence rate of 56,000 counts/s (with an equal accidental coincidence rate and a 30% Compton scatter contamination) and a spatial resolution better than 5.0 mm (FWHM) for a 400 μCi point-like source embedded in a 10 cm radius water phantom.

  12. Development of a high resolution beta camera for a direct measurement of positron distribution on brain surface

    SciTech Connect

    Yamamoto, S.; Seki, C.; Kashikura, K.

    1996-12-31

    We have developed and tested a high resolution beta camera for direct measurement of the positron distribution on the brain surface of animals. The beta camera consists of a thin CaF2(Eu) scintillator, a tapered fiber optics plate (taper fiber) and a position sensitive photomultiplier tube (PSPMT). The taper fiber is the key component of the camera. We have developed two types of beta camera: one with a 20 mm diameter field of view for imaging the brain surface of cats, and one with a 10 mm diameter field of view for that of rats. The spatial resolutions of the beta cameras for cats and rats were 0.8 mm FWHM and 0.5 mm FWHM, respectively. We confirmed that the developed beta cameras may overcome the limitation on spatial resolution of positron emission tomography (PET).

  13. Figures of merit for different detector configurations utilized in high resolution positron cameras

    SciTech Connect

    Eriksson, L.; Bergstrom, M.; Rohm, C.; Holte, S.; Kesselberg, M.

    1986-02-01

    A new positron camera system is currently being designed. The goal is an instrument that can measure the whole brain with a spatial resolution of 5 mm FWHM in all directions. In addition to the high spatial resolution, the system must be able to handle count rates of 0.5 MHz or more in order to perform accurate fast dynamic function studies such as the determination of cerebral blood flow and cerebral oxygen consumption following a rapid bolus. An overall spatial resolution of 5 mm requires crystal dimensions of 6 x 6 x L mm^3, or less, L being the length of the crystal. Timing and energy requirements necessitate high performance photomultipliers. The identification of the small size scintillator crystals can currently only be handled in schemes based on the Anger technique, in the future possibly with photodiodes. In the present work different crystal identification schemes have been investigated. The investigations have involved efficiency measurements of different scintillators, line spread function studies and the evaluation of different coding schemes in order to identify small crystals.

  14. Design of POSICAM: A high resolution multislice whole body positron camera

    SciTech Connect

    Mullani, N.A.; Wong, W.H.; Hartz, R.K.; Bristow, D.; Gaeta, J.M.; Yerian, K.; Adler, S.; Gould, K.L.

    1985-01-01

    A high resolution (6 mm), multislice (21) whole body positron camera has been designed with an innovative detector and septa arrangement for 3-D imaging and tracer quantitation. An object of interest such as the brain or the heart is optimally imaged by the 21 simultaneous image planes which have 12 mm resolution and are separated by 5.5 mm to provide adequate sampling in the axial direction. The detector geometry and the electronics are flexible enough to allow BaF2, BGO, GSO or time of flight BaF2 scintillators. The mechanical gantry has been designed for clinical applications and incorporates several features for patient handling and comfort. A large patient opening of 58 cm diameter with a tilt of ±30° and rotation of ±20° permits imaging from different positions without moving the patient. Multiprocessor computing systems and user-friendly software make the POSICAM a powerful 3-D imaging device. 7 figs.

  15. Optimization of positrons generation based on laser wakefield electron acceleration

    NASA Astrophysics Data System (ADS)

    Wu, Yuchi; Han, Dan; Zhang, Tiankui; Dong, Kegong; Zhu, Bin; Yan, Yonghong; Gu, Yuqiu

    2016-08-01

    Laser-based positron sources represent a new type of particle source with short pulse duration and high charge density. Positron production based on laser wakefield electron acceleration (LWFA) has been investigated theoretically in this paper. Analytical expressions for the positron spectra and yield have been obtained through a combination of LWFA and cascade-shower theories. The maximum positron yield and corresponding converter thickness have been optimized as a function of the driving laser power. Under the optimal condition, a high energy (>100 MeV) positron yield of up to 5 × 10^11 can be produced by high power femtosecond lasers at ELI-NP. The percentage of positrons shows that a quasineutral electron-positron jet can be generated by setting the converter thickness to greater than 5 radiation lengths.

  16. First platinum moderated positron beam based on neutron capture

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Triftshäuser, W.

    2002-12-01

    A positron beam based on absorption of high energy prompt γ-rays from thermal neutron capture in 113Cd was installed at a neutron guide of the high flux reactor at the ILL in Grenoble. Measurements were performed for various source geometries, as a function of converter mass, moderator surface and extraction voltage. The results led to an optimised design of the in-pile positron source which will be implemented at the Munich research reactor FRM-II. The positron source consists of platinum foils acting as γ-e+e- converter and positron moderator. Due to the negative positron work function, moderation in heated platinum leads to emission of monoenergetic positrons. The positron work function of polycrystalline platinum was determined to be 1.95(5) eV. After acceleration to several keV by four electrical lenses, the beam was magnetically guided in a solenoid field of 7.5 mT to a NaI detector in order to detect the 511 keV γ-radiation of the annihilating positrons. The positron beam, with a diameter of less than 20 mm, yielded an intensity of 3.1 × 10^4 moderated positrons per second. The total moderation efficiency of the positron source was about ε = 1.06(16) × 10^-4. Within the first 20 h of operation a degradation of the moderation efficiency of 30% was observed. An annealing procedure at 873 K in air recovers the platinum moderator.

  17. Industrial positron-based imaging: Principles and applications

    NASA Astrophysics Data System (ADS)

    Parker, D. J.; Hawkesworth, M. R.; Broadbent, C. J.; Fowles, P.; Fryer, T. D.; McNeil, P. A.

    1994-09-01

    Positron Emission Tomography (PET) has great potential as a non-invasive flow imaging technique in engineering, since 511 keV gamma-rays can penetrate a considerable thickness of (e.g.) steel. The RAL/Birmingham multiwire positron camera was constructed in 1984, with the initial goal of observing the lubricant distribution in operating aero-engines, automotive engines and gearboxes, and has since been used in a variety of industrial fields. The major limitation of the camera for conventional tomographic PET studies is its restricted logging rate, which limits the frequency with which images can be acquired. Tracking a single small positron-emitting tracer particle provides a more powerful means of observing high speed motion using such a camera. Following a brief review of the use of conventional PET in engineering, and the capabilities of the Birmingham camera, this paper describes recent developments in the Positron Emission Particle Tracking (PEPT) technique, and compares the results obtainable by PET and PEPT using, as an example, a study of axial diffusion of particles in a rolling cylinder.

  18. Monoenergetic positron beam at the reactor based positron source at FRM-II

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, C.; Kögel, G.; Repper, R.; Schreckenbach, K.; Sperr, P.; Straßer, B.; Triftshäuser, W.

    2002-05-01

    The principle of the in-pile positron source at the Munich research reactor FRM-II is based on absorption of high energy prompt γ-rays from thermal neutron capture in 113Cd. For this purpose, a cadmium cap is placed inside the tip of the inclined beam tube SR-11 in the moderator tank of the reactor, where an undisturbed thermal neutron flux up to 2 × 10^14 n cm^-2 s^-1 is expected. Inside the cadmium cap a structure of platinum foils is placed for converting high energy γ-radiation into positron-electron pairs. Due to the negative positron work function, moderation in annealed platinum leads to emission of monoenergetic positrons. Therefore, platinum will also be used as moderator, since its moderation property seems to yield long-term stability under reactor conditions and it is much easier to handle than tungsten. Model calculations were performed with SIMION-7.0w to optimise geometry and potential of Pt-foils and electrical lenses. It could be shown that the potentials between the Pt-foils must be chosen in the range of 1-10 V to extract moderated positrons. After successive acceleration to 5 keV by four electrical lenses the beam is magnetically guided in a solenoid field of 7.5 mT resulting in a beam diameter of about 25 mm. An intensity of about 10^10 slow positrons per second is expected in the primary positron beam. Outside of the reactor shield a W(100) single crystal remoderation stage will lead to an improvement of the positron beam brilliance before the positrons are guided to the experimental facilities.

  19. Camera-based driver assistance systems

    NASA Astrophysics Data System (ADS)

    Grimm, Michael

    2013-04-01

    In recent years, camera-based driver assistance systems have taken an important step: from laboratory setup to series production. This tutorial gives a brief overview on the technology behind driver assistance systems, presents the most significant functionalities and focuses on the processes of developing camera-based systems for series production. We highlight the critical points which need to be addressed when camera-based driver assistance systems are sold in their thousands, worldwide - and the benefit in terms of safety which results from it.

  20. Van de Graaff based positron source production

    NASA Astrophysics Data System (ADS)

    Lund, Kasey Roy

    The antimatter counterpart of the electron, the positron, can be used for a myriad of scientific research projects, including materials research, energy storage, and deep space flight propulsion. Currently there is a demand for large numbers of positrons to aid in these research projects. There are different methods of producing and harvesting positrons, but all require radioactive sources or large facilities. Positron beams produced by relatively small accelerators are attractive because they are easily shut down, and small accelerators are readily available. A 4 MV Van de Graaff accelerator was used to induce the nuclear reaction 12C(d,n)13N in order to produce an intense beam of positrons. 13N is an isotope of nitrogen that decays with a 10 minute half-life into 13C, a positron, and an electron neutrino. This radioactive gas is frozen onto a cryogenic freezer, from which it is channeled to form an antimatter beam. The beam is then guided by axial magnetic fields into a superconducting magnet with a field strength of up to 7 Tesla, where it will be stored in a newly designed Micro-Penning-Malmberg trap. Several source geometries were investigated, and a maximum positron flux of greater than 0.55 × 10^6 e+ s^-1 was achieved. This beam was produced using a solid rare gas moderator composed of krypton. Due to geometric restrictions on this setup, only 0.1-1.0% of the antimatter was frozen at the desired locations. Simulations and preliminary experiments suggest that a new geometry, currently under testing, will produce a beam of 10^7 e+ s^-1 or more.

  1. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  2. Plasma and trap-based techniques for science with positrons

    NASA Astrophysics Data System (ADS)

    Danielson, J. R.; Dubin, D. H. E.; Greaves, R. G.; Surko, C. M.

    2015-01-01

    In recent years, there has been a wealth of new science involving low-energy antimatter (i.e., positrons and antiprotons) at energies ranging from 10^2 to less than 10^-3 eV. Much of this progress has been driven by the development of new plasma-based techniques to accumulate, manipulate, and deliver antiparticles for specific applications. This article focuses on the advances made in this area using positrons. However, many of the resulting techniques are relevant to antiprotons as well. An overview is presented of relevant theory of single-component plasmas in electromagnetic traps. Methods are described to produce intense sources of positrons and to efficiently slow the typically energetic particles thus produced. Techniques are described to trap positrons efficiently and to cool and compress the resulting positron gases and plasmas. Finally, the procedures developed to deliver tailored pulses and beams (e.g., in intense, short bursts, or as quasimonoenergetic continuous beams) for specific applications are reviewed. The status of development in specific application areas is also reviewed. One example is the formation of antihydrogen atoms for fundamental physics [e.g., tests of invariance under charge conjugation, parity inversion, and time reversal (the CPT theorem), and studies of the interaction of gravity with antimatter]. Other applications discussed include atomic and materials physics studies and the study of the electron-positron many-body system, including both classical electron-positron plasmas and the complementary quantum system in the form of Bose-condensed gases of positronium atoms. Areas of future promise are also discussed. The review concludes with a brief summary and a list of outstanding challenges.

  3. Multimodal sensing-based camera applications

    NASA Astrophysics Data System (ADS)

    Bordallo López, Miguel; Hannuksela, Jari; Silvén, J. Olli; Vehviläinen, Markku

    2011-02-01

    The increased sensing and computing capabilities of mobile devices can provide for enhanced mobile user experience. Integrating the data from different sensors offers a way to improve application performance in camera-based applications. A key advantage of using cameras as an input modality is that it enables recognizing the context. Therefore, computer vision has been traditionally utilized in user interfaces to observe and automatically detect the user actions. The imaging applications can also make use of various sensors for improving the interactivity and the robustness of the system. In this context, two applications fusing the sensor data with the results obtained from video analysis have been implemented on a Nokia Nseries mobile device. The first solution is a real-time user interface that can be used for browsing large images. The solution enables the display to be controlled by the motion of the user's hand using the built-in sensors as complementary information. The second application is a real-time panorama builder that uses the device's accelerometers to improve the overall quality, providing also instructions during the capture. The experiments show that fusing the sensor data improves camera-based applications especially when the conditions are not optimal for approaches using camera data alone.

  4. Recent progress in tailoring trap-based positron beams

    SciTech Connect

    Natisin, M. R.; Hurst, N. C.; Danielson, J. R.; Surko, C. M.

    2013-03-19

    Recent progress is described to implement two approaches to specially tailor trap-based positron beams. Experiments and simulations are presented to understand the limits on the energy spread and pulse duration of positron beams extracted from a Penning-Malmberg (PM) trap after the particles have been buffer-gas cooled (or heated) in the range of temperatures 1000 ≥ T ≥ 300 K. These simulations are also used to predict beam performance for cryogenically cooled positrons. Experiments and simulations are also presented to understand the properties of beams formed when plasmas are tailored in a PM trap in a 5 tesla magnetic field, then non-adiabatically extracted from the field using a specially designed high-permeability grid to create a new class of electrostatically guided beams.

  5. Camera calibration based on parallel lines

    NASA Astrophysics Data System (ADS)

    Li, Weimin; Zhang, Yuhai; Zhao, Yu

    2015-01-01

    Nowadays, computer vision is widely used in our daily life. In order to obtain reliable information, camera calibration cannot be neglected. Traditional camera calibration cannot always be used in practice, because accurate coordinate information for the reference control points may not be available. In this article, we present a camera calibration algorithm which can determine the intrinsic parameters together with the extrinsic parameters. The algorithm is based on parallel lines, which can commonly be found in everyday photographs, so both the intrinsic and extrinsic parameters can be obtained from information extracted from ordinary photos. In more detail, we use two pairs of parallel lines to compute the vanishing points; in particular, if the two pairs are perpendicular, the corresponding vanishing points are conjugate with respect to the image of the absolute conic (IAC), and several views (at least 5) can be used to determine the IAC. The intrinsic parameters are then easily obtained by Cholesky factorization of the IAC matrix. When a vanishing point is connected with the camera optical center, the resulting line is parallel to the original lines in the scene plane; from this, the extrinsic parameters R and T are obtained. Both the simulation and the experimental results meet our expectations.
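
    As an illustration of the vanishing-point step, in homogeneous coordinates the line through two image points is their cross product, and the vanishing point of a pair of imaged parallel lines is the cross product of those two lines. The Python sketch below is a minimal, hypothetical example of that computation; it is not the authors' code, and the point coordinates are made up.

        import numpy as np

        def line_through(p, q):
            # Homogeneous line through two image points (x, y).
            return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

        def vanishing_point(line_a, line_b):
            # Intersection of two imaged parallel lines = their vanishing point.
            v = np.cross(line_a, line_b)
            return v[:2] / v[2]  # back to inhomogeneous pixel coordinates

        # Two scene-parallel lines observed in an image (example coordinates).
        l1 = line_through((100, 400), (600, 380))
        l2 = line_through((120, 600), (620, 550))
        print(vanishing_point(l1, l2))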

  6. The AOTF-based NO2 camera

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier

    2016-12-01

    The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume dynamic chemistry and its transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. Benefiting from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of the 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera as it relies on spectral images taken at wavelengths where the molecule absorption cross section is different. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m^2 area). The detection limit was close to 5 × 10^16 molecules cm^-2, with a maximum detected SCD of 4 × 10^17 molecules cm^-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO to NO2 conversion in the early plume with an unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s^-1. In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the

  7. Conceptual design of an intense positron source based on an LIA

    NASA Astrophysics Data System (ADS)

    Long, Ji-Dong; Yang, Zhen; Dong, Pan; Shi, Jin-Shui

    2012-04-01

    Accelerator-based positron sources are widely used due to their high intensity, and most of these accelerators are RF accelerators. An LIA (linear induction accelerator) is a kind of high current pulsed accelerator used for radiography. A conceptual design of an intense pulsed positron source based on an LIA is presented in this paper. One advantage of an LIA is that its pulsed power is higher than that of conventional accelerators, which means a larger number of primary electrons for positron generation per pulse. Another advantage is that an LIA is well suited to decelerating the positron bunch generated by the bremsstrahlung pair-production process, owing to its ability to shape the voltage pulse adjustably. By implementing LIA cavities to decelerate the positron bunch before it is moderated, the positron yield could be greatly increased. These features may make the LIA-based positron source a high intensity pulsed positron source.

  8. Interface circuit design and control system programming for an EMCCD camera based on Camera Link

    NASA Astrophysics Data System (ADS)

    Li, Bin-hua; Rao, Xiao-hui; Yan, Jia; Li, Da-lun; Zhang, Yi-gong

    2013-08-01

    This paper presents a solution for self-developed EMCCD cameras based on Camera Link. A new interface circuit connecting an embedded Nios II processor to the serial communication port of Camera Link in the camera is designed, and a simplified structure diagram is shown. To implement the functions of the circuit, in the hardware design it is necessary to add a universal serial communication component to the Nios II when building the processor and its peripheral components in the Altera SOPC development environment. In the software design, we use the C language to write a UART interrupt response routine for receiving and transmitting instructions and data, together with a camera control program, in the slave computer (Nios II), and we employ the Sapera LT development library and VC++ to write a serial communication routine and a camera control and image acquisition program in the host computer. The developed camera can be controlled by the host PC, the camera status can be returned to the PC, and a large amount of image data can be uploaded at high speed through a Camera Link cable. A flow chart of the serial communication and camera control program in the Nios II is given, and two operating interfaces on the PC are shown. Some design and application skills are described in detail. The test results indicate that the interface circuit and the control programs that we have developed are feasible and reliable.

  9. A new scheme to accumulate positrons in a Penning-Malmberg trap with a linac-based positron pulsed source

    NASA Astrophysics Data System (ADS)

    Dupré, P.

    2013-03-01

    The Gravitational Behaviour of Antimatter at Rest experiment (GBAR) is designed to perform a direct measurement of the weak equivalence principle on antimatter by measuring the acceleration of anti-hydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high density positronium (Ps) cloud as a target for antiprotons, provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. The Ps target will be produced by a pulse of a few 10^10 positrons injected onto a positron-positronium converter. For this purpose, a slow positron source using an electron linac has been constructed at Saclay. The present flux is comparable with that of 22Na-based sources using a solid neon moderator. A new positron accumulation scheme with a Penning-Malmberg trap has been proposed, taking advantage of the pulsed time structure of the beam. In the trap, the positrons are cooled by interaction with a dense electron plasma. The overall trapping efficiency has been estimated to be ~70% by numerical simulations.

  10. Design Study of Linear Accelerator-Based Positron Re-Emission Microscopy

    NASA Astrophysics Data System (ADS)

    Ogawa, Hiroshi; Kinomura, Atsushi; Oshima, Nagayasu; Suzuki, Ryoichi; O'Rourke, Brian E.

    In order to shorten the acquisition time of positron re-emission microscopy (PRM), a linear accelerator (LINAC)-based PRM system has been studied. The beam focusing system was designed to obtain a high brightness positron beam on the PRM sample. The beam size at the sample was calculated to be 0.8mm (FWHM), and the positron intensity within the field of view of the PRM was more than one order of magnitude higher in comparison with the previous studies.

  11. Video-Based Point Cloud Generation Using Multiple Action Cameras

    NASA Astrophysics Data System (ADS)

    Teo, T.

    2015-05-01

    Due to the development of action cameras, the use of video technology for collecting geo-spatial data has become an important trend. The objective of this study is to compare the image mode and video mode of multiple action cameras for 3D point cloud generation. Frame images are acquired from discrete camera stations, while videos are taken along continuous trajectories. The proposed method includes five major parts: (1) camera calibration, (2) video conversion and alignment, (3) orientation modelling, (4) dense matching, and (5) evaluation. As action cameras usually have a large FOV in wide viewing mode, camera calibration plays an important role in correcting the effect of lens distortion before image matching. Once the cameras have been calibrated, they are used to take video in an indoor environment. The videos are then converted into multiple frame images based on the frame rates. In order to overcome time synchronization issues between videos from different viewpoints, an additional timer app is used to determine the time-shift factor between cameras for time alignment. A structure from motion (SfM) technique is utilized to obtain the image orientations. Then, the semi-global matching (SGM) algorithm is adopted to obtain dense 3D point clouds. The preliminary results indicate that the 3D points from 4K video are similar to those from 12 MP images, but the data acquisition performance of 4K video is more efficient than that of 12 MP digital images.
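
    For the video conversion and alignment step, a minimal sketch (assuming OpenCV and a per-camera time shift obtained from the timer app) is to sample each video at fixed intervals after applying that shift. The function name and parameters below are hypothetical illustrations, not the authors' pipeline.

        import cv2

        def extract_frames(video_path, out_prefix, time_shift_s=0.0, step_s=0.5):
            """Sample frames every step_s seconds, offset by a per-camera time shift."""
            cap = cv2.VideoCapture(video_path)
            fps = cap.get(cv2.CAP_PROP_FPS)
            duration = cap.get(cv2.CAP_PROP_FRAME_COUNT) / fps
            t, i = time_shift_s, 0
            while t < duration:
                cap.set(cv2.CAP_PROP_POS_MSEC, t * 1000.0)
                ok, frame = cap.read()
                if not ok:
                    break
                cv2.imwrite(f"{out_prefix}_{i:04d}.png", frame)
                t += step_s
                i += 1
            cap.release()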

  12. A Novel Camera Calibration Method Based on Polar Coordinate

    PubMed Central

    Gai, Shaoyan; Da, Feipeng; Fang, Xu

    2016-01-01

    A novel calibration method based on polar coordinates is proposed. The world coordinates are expressed in the form of polar coordinates, which are converted to rectangular world coordinates during the calibration process. In the beginning, the calibration points are obtained in polar coordinates. By transformation between polar and rectangular coordinates, the points are converted to rectangular form. Then, the points are matched with the corresponding image coordinates. At last, the parameters are obtained by objective function optimization. With the proposed method, the relationships between objects and cameras are expressed easily in polar coordinates. It is suitable for multi-camera calibration, and cameras can be calibrated with fewer points. The calibration images can be positioned according to the location of the cameras. The experimental results demonstrate that the proposed method is an efficient calibration method, by which cameras are calibrated conveniently with high accuracy. PMID:27798651
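
    The transformation at the core of the method is the usual polar-to-rectangular conversion x = r cos(θ), y = r sin(θ). The lines below are a minimal, hypothetical sketch of converting calibration points given as (r, θ) pairs into rectangular coordinates before matching them with image points; they are not taken from the paper.

        import numpy as np

        def polar_to_rect(points_polar):
            # points_polar: array of (r, theta) pairs, theta in radians.
            r, theta = points_polar[:, 0], points_polar[:, 1]
            return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

        pts = np.array([(100.0, 0.0), (100.0, np.pi / 4), (150.0, np.pi / 2)])
        print(polar_to_rect(pts))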

  13. Spectral Camera based on Ghost Imaging via Sparsity Constraints

    PubMed Central

    Liu, Zhentao; Tan, Shiyu; Wu, Jianrong; Li, Enrong; Shen, Xia; Han, Shensheng

    2016-01-01

    The image information acquisition ability of a conventional camera is usually much lower than the Shannon limit, since it does not make use of the correlation between pixels of image data. By applying a random phase modulator to code the spectral images and combining this with compressive sensing (CS) theory, a spectral camera based on true thermal light ghost imaging via sparsity constraints (GISC spectral camera) is proposed and demonstrated experimentally. The GISC spectral camera can acquire information at a rate significantly below the Nyquist rate, and the resolution of the cells in the three-dimensional (3D) spectral image data-cube can be achieved with a two-dimensional (2D) detector in a single exposure. For the first time, the GISC spectral camera opens the way to approaching the Shannon limit determined by information theory in optical imaging instruments. PMID:27180619
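
    The "via sparsity constraints" recovery amounts to solving an underdetermined linear system y = A x under an l1 (sparsity) penalty. The toy sketch below uses plain iterative soft thresholding (ISTA) on random data; it is a generic illustration of compressive-sensing recovery, not the GISC reconstruction itself.

        import numpy as np

        def ista(A, y, lam=0.05, n_iter=200):
            """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
            L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                g = x - (A.T @ (A @ x - y)) / L
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((64, 256))            # fewer measurements than unknowns
        x_true = np.zeros(256)
        x_true[rng.choice(256, 8, replace=False)] = 1.0
        x_rec = ista(A, A @ x_true)                   # approximately recovers x_true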

  14. New light field camera based on physical based rendering tracing

    NASA Astrophysics Data System (ADS)

    Chung, Ming-Han; Chang, Shan-Ching; Lee, Chih-Kung

    2014-03-01

    Even though light field technology was first invented more than 50 years ago, it did not gain popularity due to the limitations of the computing technology of the time. With the rapid advancement of computer technology over the last decade, this limitation has been lifted and light field technology has quickly returned to the research spotlight. In this paper, PBRT (Physical Based Rendering Tracing) is introduced to overcome the limitations of the traditional optical simulation approach to studying light field camera technology. More specifically, the traditional optical simulation approach can only present light energy distributions and typically lacks the capability to render realistic scenes. By using PBRT, which was developed to create virtual scenes, 4D light field information was obtained to conduct initial data analysis and calculation. This PBRT approach was also used to explore the potential of light field data calculation in creating realistic photos. Furthermore, we integrated optical experimental measurement results with PBRT in order to place the real measurements into the virtually created scenes. In other words, our approach provides a way to link virtual scenes with real measurement results. Several images developed with the above-mentioned approaches are analyzed and discussed to verify the pros and cons of the newly developed PBRT-based light field camera technology. It is shown that this newly developed light field camera approach can circumvent the loss of spatial resolution associated with placing a micro-lens array in front of the image sensor. Detailed operational constraints, performance metrics, required computation resources, etc., associated with this newly developed light field camera technique are presented in detail.

  15. An Undulator Based Polarized Positron Source for CLIC

    SciTech Connect

    Liu, Wanming; Gai, Wei; Rinolfi, Louis; Sheppard, John; /SLAC

    2012-07-02

    A viable positron source scheme is proposed that uses circularly polarized gamma rays generated from the main 250 GeV electron beam. The beam passes through a helical superconducting undulator with a magnetic field of ~1 Tesla and a period of 1.15 cm. The gamma-rays produced in the undulator in the energy range between ~3 MeV and ~100 MeV will be directed to a titanium target and produce polarized positrons. The positrons are then captured, accelerated and transported to a Pre-Damping Ring (PDR). Detailed parameter studies of this scheme including positron yield, and undulator parameter dependence are presented. Effects on the 250 GeV CLIC main beam, including emittance growth and energy loss from the beam passing through the undulator are also discussed.

  16. Study of Trade Wind Clouds Using Ground Based Stereo Cameras

    NASA Astrophysics Data System (ADS)

    Porter, J.

    2010-12-01

    We employ ground-based stereo cameras to derive the three-dimensional positions of trade wind cloud features. The process employs both traditional and novel methods. The stereo cameras are calibrated for orientation using the sun as a geo-reference point at several times throughout the day. Spatial correlation is used to detect similar cloud features in both camera images, and a set of simultaneous equations is solved to get the best cloud position for the given rays from the cameras to the cloud feature. Once the positions of the clouds are known in three-dimensional space, it is also possible to derive upper level wind speed and direction by tracking the position of clouds in space and time. The vector winds can be obtained at many locations and heights in a cone-shaped region over the surface site. The accuracy of the measurement depends on the camera separation, with a trade-off occurring between different camera separations and cloud ranges. The system design and performance will be discussed along with field observations. This approach provides a new way to study clouds for climate change efforts. It also provides an inexpensive way to measure upper level wind fields in cloudy regions. Ground-based stereo cameras are used to derive cloud positions in space and time.
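
    Once a cloud feature has been matched in both images, its 3D position is essentially the point that best satisfies the two camera rays. The sketch below (hypothetical code with made-up numbers, not the authors' implementation) computes the least-squares point closest to two rays.

        import numpy as np

        def triangulate(origins, directions):
            """Least-squares point closest to a set of rays (origin + t * direction)."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for p, d in zip(origins, directions):
                d = d / np.linalg.norm(d)
                M = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
                A += M
                b += M @ p
            return np.linalg.solve(A, b)

        # Two cameras a few hundred metres apart viewing the same cloud feature.
        origins = [np.array([0.0, 0.0, 0.0]), np.array([300.0, 0.0, 0.0])]
        directions = [np.array([0.2, 1.0, 0.8]), np.array([-0.1, 1.0, 0.8])]
        print(triangulate(origins, directions))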

  17. Compact and robust hyperspectral camera based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Denk, O.; Hlubuček, J.; Václavík, J.

    2016-11-01

    The spectrum of light emitted or reflected by an object carries an immense amount of information about that object; a simple piece of evidence is the importance of color sensing for human vision. Combining image acquisition with efficient measurement of the light spectrum for each detected pixel is therefore one of the important issues in imaging, referred to as hyperspectral imaging. We demonstrate the construction of a compact and robust hyperspectral camera for the visible and near-IR spectral region. The camera was designed largely around off-the-shelf optics, yet extensive optimization and the addition of three customized parts enabled construction of a camera featuring a low f-number (F/3.9) and fully concentric optics. We employ the novel approach of compressed sensing (namely coded aperture snapshot spectral imaging, abbrev. CASSI). Compressed sensing enables an encoded hyperspectral data set to be extracted computationally from a single camera exposure. Owing to this technique the camera lacks any moving or scanning part, while it can record the full image and spectral information in a single snapshot. Moreover, unlike the commonly used compressed sensing table-top apparatuses, the camera represents a portable device able to work outside a lab. We demonstrate the spectro-temporal reconstruction of recorded scenes based on 90×90 random matrix encoding. Finally, we discuss the potential of compressed sensing in hyperspectral cameras.

  18. Global Calibration of Multiple Cameras Based on Sphere Targets

    PubMed Central

    Sun, Junhua; He, Huabin; Zeng, Debing

    2016-01-01

    Global calibration methods for multi-camera systems are critical to the accuracy of vision measurement. Proposed in this paper is such a method based on several groups of sphere targets and a precision auxiliary camera. Each camera to be calibrated observes a group of spheres (at least three), while the auxiliary camera observes all the spheres. The global calibration can be achieved after each camera reconstructs the sphere centers in its field of view. In the process of reconstructing a sphere center, a parameter equation is used to describe the sphere projection model. Theoretical analysis and computer simulation are carried out to analyze the factors that affect the calibration accuracy. Simulation results show that the parameter equation can largely improve the reconstruction accuracy. In the experiments, a two-camera system calibrated by our method is used to measure a distance of about 578 mm, and the root mean squared error is within 0.14 mm. Furthermore, the experiments indicate that the method has simple operation and good flexibility, especially for on-site multi-camera setups without a common field of view. PMID:26761007

  19. Fuzzy-rule-based image reconstruction for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Mondal, Partha P.; Rajan, K.

    2005-09-01

    Positron emission tomography (PET) and single-photon emission computed tomography have revolutionized the field of medicine and biology. Penalized iterative algorithms based on maximum a posteriori (MAP) estimation eliminate noisy artifacts by utilizing available prior information in the reconstruction process but often result in a blurring effect. MAP-based algorithms fail to determine the density class in the reconstructed image and hence penalize the pixels irrespective of the density class. Reconstruction with better edge information is often difficult because prior knowledge is not taken into account. The recently introduced median-root-prior (MRP)-based algorithm preserves the edges, but a steplike streaking effect is observed in the reconstructed image, which is undesirable. A fuzzy approach is proposed for modeling the nature of interpixel interaction in order to build an artifact-free edge-preserving reconstruction. The proposed algorithm consists of two elementary steps: (1) edge detection, in which fuzzy-rule-based derivatives are used for the detection of edges in the nearest neighborhood window (which is equivalent to recognizing nearby density classes), and (2) fuzzy smoothing, in which penalization is performed only for those pixels for which no edge is detected in the nearest neighborhood. Both of these operations are carried out iteratively until the image converges. Analysis shows that the proposed fuzzy-rule-based reconstruction algorithm is capable of producing qualitatively better reconstructed images than those reconstructed by MAP and MRP algorithms. The reconstructed images are sharper, with small features being better resolved owing to the nature of the fuzzy potential function.
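
    The two-step structure (detect edges in the nearest-neighbourhood window, then smooth only where no edge is found) can be sketched generically. The code below replaces the paper's fuzzy-rule derivatives with a simple hard threshold on neighbourhood differences, so it is only an illustrative approximation of the edge-gated smoothing idea, not the published algorithm.

        import numpy as np

        def edge_gated_smooth(img, edge_thresh, n_iter=10):
            """Iteratively smooth pixels whose 3x3 neighbourhood shows no strong edge."""
            out = img.astype(float).copy()
            h, w = out.shape
            for _ in range(n_iter):
                padded = np.pad(out, 1, mode="edge")
                # Stack the 8 neighbours of every pixel.
                neigh = np.stack([padded[dy:dy + h, dx:dx + w]
                                  for dy in range(3) for dx in range(3)
                                  if not (dy == 1 and dx == 1)])
                max_diff = np.max(np.abs(neigh - out), axis=0)
                no_edge = max_diff < edge_thresh        # stands in for the fuzzy edge rules
                out[no_edge] = np.median(neigh, axis=0)[no_edge]
            return out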

  20. FPGA-Based Pulse Parameter Discovery for Positron Emission Tomography.

    PubMed

    Haselman, Michael; Hauck, Scott; Lewellen, Thomas K; Miyaoka, Robert S

    2009-10-24

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex digital signal processing algorithms with clock rates well above 100 MHz. This, combined with FPGAs' low expense and ease of use, makes them an ideal technology for the data acquisition system of a positron emission tomography (PET) scanner. The University of Washington is producing a series of high-resolution, small-animal PET scanners that utilize FPGAs as the core of the front-end electronics. For these next generation scanners, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper we report how we utilize the reconfigurable property of an FPGA to self-calibrate and determine the pulse parameters necessary for some of the pulse processing steps. Specifically, we show how the FPGA can generate a reference pulse based on actual pulse data instead of a model. We also report how other properties of the photodetector pulse (baseline, pulse length, average pulse energy and event triggers) can be determined automatically by the FPGA.
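
    One way to picture the self-calibration is as simple statistics over captured pulse records: the baseline is the mean of pre-trigger samples, and the reference pulse is the normalized average of baseline-corrected pulses. The snippet below is a hypothetical software analogue of those steps, not the authors' FPGA firmware.

        import numpy as np

        def pulse_parameters(pulses, n_baseline=20):
            """Estimate baseline, reference pulse and average energy from raw pulses."""
            pulses = np.asarray(pulses, dtype=float)
            baseline = pulses[:, :n_baseline].mean()     # mean of pre-trigger samples
            corrected = pulses - baseline
            reference = corrected.mean(axis=0)
            reference /= reference.sum()                 # unit-area reference pulse
            avg_energy = corrected.sum(axis=1).mean()    # mean pulse integral
            return baseline, reference, avg_energy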

  1. A cooperative control algorithm for camera based observational systems.

    SciTech Connect

    Young, Joseph G.

    2012-01-01

    Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding horizon control scheme in which we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions of the cameras while simultaneously respecting their kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
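
    The Kalman-filter coupling can be illustrated with a constant-velocity filter: targets that go unobserved accumulate predicted covariance, which is what lets the controller favour revisiting them. The sketch below shows a minimal 1D predict/update cycle under assumed process and measurement noise parameters; it is an illustration, not the paper's formulation.

        import numpy as np

        def kf_predict(x, P, dt, q=1.0):
            # Constant-velocity model in 1D: state = [position, velocity].
            F = np.array([[1.0, dt], [0.0, 1.0]])
            Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
            return F @ x, F @ P @ F.T + Q   # unobserved targets accumulate covariance

        def kf_update(x, P, z, r=1.0):
            # Position-only measurement z.
            H = np.array([[1.0, 0.0]])
            S = H @ P @ H.T + r
            K = P @ H.T / S
            x = x + (K * (z - H @ x)).ravel()
            P = (np.eye(2) - K @ H) @ P
            return x, P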

  2. Performance of the (n,{gamma})-Based Positron Beam Facility NEPOMUC

    SciTech Connect

    Schreckenbach, K.; Hugenschmidt, C.; Piochacz, C.; Stadlbauer, M.; Loewe, B.; Maier, J.; Pikart, P.

    2009-01-28

    The in-pile positron source of NEPOMUC at the neutron source Heinz Maier-Leibnitz (FRM II) provides at the experimental site an intense beam of monoenergetic positrons with selectable energy between 15 eV and 3 keV. The principle of the source is based on neutron capture gamma rays produced by cadmium in a beam tube tip close to the reactor core. The gamma ray absorption in platinum produces positrons which are moderated and formed to the beam. An unprecedented beam intensity of 9 × 10^8 e+/s is achieved (1 keV). The performance and applications of the facility are presented.

  3. Observation of Polarized Positrons from an Undulator-Based Source

    SciTech Connect

    Alexander, G; Barley, J.; Batygin, Y.; Berridge, S.; Bharadwaj, V.; Bower, G.; Bugg, W.; Decker, F.-J.; Dollan, R.; Efremenko, Y.; Gharibyan, V.; Hast, C.; Iverson, R.; Kolanoski, H.; Kovermann, J.; Laihem, K.; Lohse, T.; McDonald, K.T.; Mikhailichenko, A.A.; Moortgat-Pick, G.A.; Pahl, P.; /Tel Aviv U. /Cornell U., Phys. Dept. /SLAC /Tennessee U. /Humboldt U., Berlin /DESY /Yerevan Phys. Inst. /Aachen, Tech. Hochsch. /DESY, Zeuthen /Princeton U. /Durham U. /Daresbury

    2008-03-06

    An experiment (E166) at the Stanford Linear Accelerator Center (SLAC) has demonstrated a scheme in which a multi-GeV electron beam passed through a helical undulator to generate multi-MeV, circularly polarized photons which were then converted in a thin target to produce positrons (and electrons) with longitudinal polarization above 80% at 6 MeV. The results are in agreement with Geant4 simulations that include the dominant polarization-dependent interactions of electrons, positrons and photons in matter.

  4. Defects in nitride-based semiconductors probed by positron annihilation

    NASA Astrophysics Data System (ADS)

    Uedono, A.; Sumiya, M.; Ishibashi, S.; Oshima, N.; Suzuki, R.

    2014-04-01

    Point defects in InxGa1-xN grown by metal organic chemical vapor deposition were studied by a monoenergetic positron beam. Measurements of Doppler broadening spectra of the annihilation radiation as a function of incident positron energy for InxGa1-xN (x = 0.08 and 0.14) showed that vacancy-type defects were introduced with increasing InN composition. From comparisons between coincidence Doppler broadening spectra and the results calculated using the projector augmented-wave method, the major defect species was identified as the complexes between a cation vacancy and nitride vacancies. The concentration of the defects was found to be suppressed by Mg doping. An effect of Mg-doping on the positron diffusion properties in GaN and InN was also discussed. The momentum distribution of electrons at the InxGa1-xN/GaN interface was close to that in defect-free GaN or InxGa1-xN, which was attributed to the localization of positrons at the interface due to the electric field caused by polarizations.

  5. Design of microcontroller based system for automation of streak camera

    SciTech Connect

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.; Sharma, M. L.; Navathe, C. P.

    2010-08-15

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  6. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8 bit MCS family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for various electrodes of the tubes are generated using dc-to-dc converters. A high voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LABVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  7. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
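
    The pairwise alignment step is a standard orthogonal Procrustes problem: given the 3D head and feet positions reconstructed in two camera coordinate systems, an SVD yields the rotation and translation between them. The sketch below is a generic Kabsch/Procrustes solver, not the authors' RANSAC-based variant.

        import numpy as np

        def procrustes_rt(src, dst):
            """Rigid transform (R, t) such that dst_i = R @ src_i + t, via SVD (Kabsch)."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t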

  8. Extrinsic Calibration of Camera Networks Based on Pedestrians.

    PubMed

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-05-09

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life.

  9. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  10. A Robust Camera-Based Interface for Mobile Entertainment.

    PubMed

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-02-19

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different configuration factors such as the gain or the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user's perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people.

  11. Analysis of unstructured video based on camera motion

    NASA Astrophysics Data System (ADS)

    Abdollahian, Golnaz; Delp, Edward J.

    2007-01-01

    Although considerable work has been done in the management of "structured" video such as movies, sports, and television programs that have known scene structures, "unstructured" video analysis is still a challenging problem due to its unrestricted nature. The purpose of this paper is to address issues in the analysis of unstructured video, and in particular video shot by a typical unprofessional user (i.e., home video). We describe how one can make use of camera motion information for unstructured video analysis. A new concept, "camera viewing direction," is introduced as the building block of home video analysis. Motion displacement vectors are employed to temporally segment the video based on this concept. We then relate the camera behavior to the subjective importance of the information in each segment and describe how different patterns in the camera motion can indicate levels of interest in a particular object or scene. By extracting these patterns, the most representative frames, keyframes, for the scenes are determined and aggregated to summarize the video sequence.
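
    As a rough illustration of using camera-motion information for temporal segmentation, the sketch below computes a per-frame global motion magnitude from dense optical flow and flags low-motion frames as keyframe candidates. The paper itself works with motion displacement vectors and a camera-viewing-direction model, so this OpenCV-based simplification, the file name, and the thresholds are assumptions only.

      import cv2
      import numpy as np

      def motion_profile(video_path, step=5):
          """Median optical-flow magnitude per sampled frame pair, used as a
          crude proxy for camera motion in unstructured (home) video."""
          cap = cv2.VideoCapture(video_path)
          ok, prev = cap.read()
          prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
          profile, idx = [], 0
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              idx += 1
              if idx % step:
                  continue
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                  0.5, 3, 15, 3, 5, 1.2, 0)
              profile.append(np.median(np.linalg.norm(flow, axis=2)))
              prev_gray = gray
          cap.release()
          return np.array(profile)

      # Frames where the camera is nearly static are keyframe candidates;
      # sustained high motion suggests a segment boundary.
      prof = motion_profile("home_video.mp4")
      keyframe_candidates = np.where(prof < np.percentile(prof, 20))[0]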

  12. EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    EAST FACE OF REACTOR BASE. COMING TOWARD CAMERA IS EXCAVATION FOR MTR CANAL. CAISSONS FLANK EACH SIDE. COUNTERFORT (SUPPORT PERPENDICULAR TO WHAT WILL BE THE LONG WALL OF THE CANAL) RESTS ATOP LEFT CAISSON. IN LOWER PART OF VIEW, DRILLERS PREPARE TRENCHES FOR SUPPORT BEAMS THAT WILL LIE BENEATH CANAL FLOOR. INL NEGATIVE NO. 739. Unknown Photographer, 10/6/1950 - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  13. Formation of buffer-gas-trap based positron beams

    SciTech Connect

    Natisin, M. R. Danielson, J. R. Surko, C. M.

    2015-03-15

    Presented here are experimental measurements, analytic expressions, and simulation results for pulsed, magnetically guided positron beams formed using a Penning-Malmberg style buffer gas trap. In the relevant limit, particle motion can be separated into motion along the magnetic field and gyro-motion in the plane perpendicular to the field. Analytic expressions are developed which describe the evolution of the beam energy distributions, both parallel and perpendicular to the magnetic field, as the beam propagates through regions of varying magnetic field. Simulations of the beam formation process are presented, with the parameters chosen to accurately replicate experimental conditions. The initial conditions and ejection parameters are varied systematically in both experiment and simulation, allowing the relevant processes involved in beam formation to be explored. These studies provide new insights into the underlying physics, including significant adiabatic cooling, due to the time-dependent beam-formation potential. Methods to improve the beam energy and temporal resolution are discussed.

  14. Development of mini linac-based positron source and an efficient positronium convertor for positively charged antihydrogen production

    NASA Astrophysics Data System (ADS)

    Muranaka, T.; Debu, P.; Dupré, P.; Liszkay, L.; Mansoulie, B.; Pérez, P.; Rey, J. M.; Ruiz, N.; Sacquin, Y.; Crivelli, P.; Gendotti, U.; Rubbia, A.

    2010-04-01

    In November 2008, we installed in Saclay a facility for an intense positron source. It is based on a compact 5.5 MeV electron linac connected to a reaction chamber with a tungsten target inside to produce positrons via pair production. The expected production rate for fast positrons is 5·10^11 per second. The study of moderation of fast positrons and the construction of a slow positron trap are underway. In parallel, we have investigated an efficient positron-positronium convertor using porous silica materials. These studies are part of a project to produce positively charged antihydrogen ions aiming to demonstrate the feasibility of a free fall antigravity measurement of neutral antihydrogen.

  15. A trap-based pulsed positron beam optimised for positronium laser spectroscopy

    SciTech Connect

    Cooper, B. S. Alonso, A. M.; Deller, A.; Wall, T. E.; Cassidy, D. B.

    2015-10-15

    We describe a pulsed positron beam that is optimised for positronium (Ps) laser-spectroscopy experiments. The system is based on a two-stage Surko-type buffer gas trap that produces 4 ns wide pulses containing up to 5 × 10^5 positrons at a rate of 0.5-10 Hz. By implanting positrons from the trap into a suitable target material, a dilute positronium gas with an initial density of the order of 10^7 cm^-3 is created in vacuum. This is then probed with pulsed (ns) laser systems, where various Ps-laser interactions have been observed via changes in Ps annihilation rates using a fast gamma ray detector. We demonstrate the capabilities of the apparatus and detection methodology via the observation of Rydberg positronium atoms with principal quantum numbers ranging from 11 to 22 and the Stark broadening of the n = 2 → 11 transition in electric fields.

  16. 78 FR 68475 - Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... COMMISSION Certain Vision-Based Driver Assistance System Cameras and Components Thereof; Institution of...-based driver assistance system cameras and components thereof by reason of infringement of certain... assistance system cameras and components thereof by reason of infringement of one or more of claims 1, 2,...

  17. Visual homing with a pan-tilt based stereo camera

    NASA Astrophysics Data System (ADS)

    Nirmal, Paramesh; Lyons, Damian M.

    2013-01-01

    Visual homing is a navigation method based on comparing a stored image of the goal location and the current image (current view) to determine how to navigate to the goal location. It is theorized that insects, such as ants and bees, employ visual homing methods to return to their nest. Visual homing has been applied to autonomous robot platforms using two main approaches: holistic and feature-based. Both methods aim at determining distance and direction to the goal location. Navigational algorithms using Scale Invariant Feature Transforms (SIFT) have gained great popularity in recent years due to the robustness of the feature operator. Churchill and Vardy have developed a visual homing method using scale change information (Homing in Scale Space, HiSS) from SIFT. HiSS uses SIFT feature scale change information to determine the distance between the robot and the goal location. Since the scale component is discrete with a small range of values, the result is a rough measurement with limited accuracy. We have developed a method that uses stereo data, resulting in better homing performance. Our approach utilizes a pan-tilt based stereo camera, which is used to build composite wide-field images. We use the wide-field images combined with stereo data obtained from the stereo camera to extend the keypoint vector described in to include a new parameter, depth (z). Using this information, our algorithm determines the distance and orientation from the robot to the goal location. We compare our method with HiSS in a set of indoor trials using a Pioneer 3-AT robot equipped with a BumbleBee2 stereo camera. We evaluate the performance of both methods using a set of performance measures described in this paper.
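
    A heavily simplified sketch of the depth-augmented keypoint idea follows: SIFT features are matched between the goal and current views, each match is annotated with stereo depth, and the median depth ratio and horizontal pixel offset stand in for distance and direction to the goal. This is an assumption-laden illustration, not the authors' algorithm; cv2.SIFT_create requires a sufficiently recent OpenCV build, and the depth maps are assumed to be registered to the images and given in metres.

      import cv2
      import numpy as np

      def homing_vector(goal_img, goal_depth, cur_img, cur_depth):
          """Rough (approach_sign, bearing_px) toward the goal location from
          SIFT matches augmented with per-pixel depth."""
          sift = cv2.SIFT_create()
          kg, dg = sift.detectAndCompute(goal_img, None)
          kc, dc = sift.detectAndCompute(cur_img, None)
          matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(dg, dc, k=2)
          good = [m for m, n in matches if m.distance < 0.7 * n.distance]

          depth_ratios, bearings = [], []
          for m in good:
              xg, yg = map(int, kg[m.queryIdx].pt)
              xc, yc = map(int, kc[m.trainIdx].pt)
              zg, zc = goal_depth[yg, xg], cur_depth[yc, xc]
              if zg > 0 and zc > 0:
                  depth_ratios.append(zc / zg)   # >1: scene is farther now than at the goal
                  bearings.append(kc[m.trainIdx].pt[0] - kg[m.queryIdx].pt[0])
          approach = np.sign(np.median(depth_ratios) - 1.0)   # +1: move forward
          bearing_px = np.median(bearings)                    # horizontal offset in pixels
          return approach, bearing_px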

  18. Estimation of Cometary Rotation Parameters Based on Camera Images

    NASA Technical Reports Server (NTRS)

    Spindler, Karlheinz

    2007-01-01

    The purpose of the Rosetta mission is the in situ analysis of a cometary nucleus using both remote sensing equipment and scientific instruments delivered to the comet surface by a lander and transmitting measurement data to the comet-orbiting probe. Following a tour of planets including one Mars swing-by and three Earth swing-bys, the Rosetta probe is scheduled to rendezvous with comet 67P/Churyumov-Gerasimenko in May 2014. The mission poses various flight dynamics challenges, both in terms of parameter estimation and maneuver planning. Along with spacecraft parameters, the comet's position, velocity, attitude, angular velocity, inertia tensor and gravitational field need to be estimated. The measurements on which the estimation process is based are ground-based measurements (range and Doppler) yielding information on the heliocentric spacecraft state and images taken by an on-board camera yielding information on the comet state relative to the spacecraft. The image-based navigation depends on the identification of cometary landmarks (whose body coordinates also need to be estimated in the process). The paper will describe the estimation process involved, focusing on the phase when, after orbit insertion, the task arises to estimate the cometary rotational motion from camera images on which individual landmarks begin to become identifiable.

  19. Linac-based positron source and generation of a high density positronium cloud for the GBAR experiment

    NASA Astrophysics Data System (ADS)

    Liszkay, L.; Comini, P.; Corbel, C.; Debu, P.; Dupré, P.; Grandemange, P.; Pérez, P.; Rey, J.-M.; Ruiz, N.; Sacquin, Y.

    2013-06-01

    The aim of the recently approved GBAR (Gravitational Behaviour of Antihydrogen at Rest) experiment is to measure the acceleration of neutral antihydrogen atoms in the gravitational field of the Earth. The experimental scheme requires a high density positronium cloud as a target for antiprotons, provided by the Antiproton Decelerator (AD) - Extra Low Energy Antiproton Ring (ELENA) facility at CERN. We introduce briefly the experimental scheme and present the ongoing efforts at IRFU CEA Saclay to develop the positron source and the positron-positronium converter, which are key parts of the experiment. We have constructed a slow positron source in Saclay, based on a low energy (4.3 MeV) linear electron accelerator (linac). By using an electron target made of tungsten and a stack of thin W meshes as positron moderator, we reached a slow positron intensity that is comparable with that of 22Na-based sources using a solid neon moderator. The source feeds positrons into a high field (5 T) Penning-Malmberg trap. Intense positron pulses from the trap will be converted to slow ortho-positronium (o-Ps) by a converter structure. Mesoporous silica films appear to date to be the best candidates as converter material. We discuss our studies to find the optimal pore configuration for the positron-positronium converter.

  20. Noninvasive particle sizing using camera-based diffuse reflectance spectroscopy.

    PubMed

    Abildgaard, Otto Højager Attermann; Frisvad, Jeppe Revall; Falster, Viggo; Parker, Alan; Christensen, Niels Jørgen; Dahl, Anders Bjorholm; Larsen, Rasmus

    2016-05-10

    Diffuse reflectance measurements are useful for noninvasive inspection of optical properties such as reduced scattering and absorption coefficients. Spectroscopic analysis of these optical properties can be used for particle sizing. Systems based on optical fiber probes are commonly employed, but their low spatial resolution limits their validity ranges for the coefficients. To cover a wider range of coefficients, we use camera-based spectroscopic oblique incidence reflectometry. We develop a noninvasive technique for acquisition of apparent particle size distributions based on this approach. Our technique is validated using stable oil-in-water emulsions with a wide range of known particle size distributions. We also measure the apparent particle size distributions of complex dairy products. These results show that our tool, in contrast to those based on fiber probes, can deal with a range of optical properties wide enough to track apparent particle size distributions in a typical industrial process.

  1. Camera-based forecasting of insolation for solar systems

    NASA Astrophysics Data System (ADS)

    Manger, Daniel; Pagel, Frank

    2015-02-01

    With the transition towards renewable energies, electricity suppliers are faced with huge challenges. Especially the increasing integration of solar power systems into the grid gets more and more complicated because of their dynamic feed-in capacity. To assist the stabilization of the grid, the feed-in capacity of a solar power system within the next hours, minutes and even seconds should be known in advance. In this work, we present a consumer camera-based system for forecasting the feed-in capacity of a solar system for a horizon of 10 seconds. A camera is targeted at the sky and clouds are segmented, detected and tracked. A quantitative prediction of the insolation is performed based on the tracked clouds. Image data as well as truth data for the feed-in capacity was synchronously collected at one Hz using a small solar panel, a resistor and a measuring device. Preliminary results demonstrate both the applicability and the limits of the proposed system.
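
    The cloud segmentation step can be illustrated with the common red/blue-ratio heuristic for sky imagers, sketched below. The paper's actual segmentation, tracking, and 10 s prediction stages are not reproduced here; the threshold, kernel size, and file name are assumptions.

      import cv2
      import numpy as np

      def cloud_mask(sky_bgr, rb_threshold=0.75):
          """Segment clouds using the red/blue ratio: clear sky is strongly
          blue (low R/B) while clouds are nearly achromatic (R/B near 1)."""
          b = sky_bgr[:, :, 0].astype(np.float32) + 1e-6
          r = sky_bgr[:, :, 2].astype(np.float32)
          mask = ((r / b) > rb_threshold).astype(np.uint8) * 255
          kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
          return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckles

      frame = cv2.imread("sky_frame.png")
      mask = cloud_mask(frame)
      cloud_fraction = mask.mean() / 255.0   # crude proxy for expected insolation drop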

  2. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs of the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
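
    One common way to realize this kind of pixel-to-3D computation, when the target geometry is known, is a perspective-n-point solve against the calibrated camera model. The sketch below assumes a square target of known side length and placeholder intrinsics; the article's own target design and computation may differ.

      import cv2
      import numpy as np

      # Intrinsics from a prior calibration (placeholder values)
      K = np.array([[900.0, 0.0, 640.0],
                    [0.0, 900.0, 360.0],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)

      # Square target of known side length (metres), corners in target coordinates
      side = 0.10
      obj_pts = np.array([[0, 0, 0], [side, 0, 0],
                          [side, side, 0], [0, side, 0]], dtype=np.float32)

      def target_position(corner_pixels):
          """Target origin in camera coordinates (x, y, z in metres) from its
          four detected corner pixels (4x2 array, same order as obj_pts)."""
          img_pts = np.asarray(corner_pixels, dtype=np.float32)
          ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
          return tvec.ravel() if ok else None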

  3. Evolutionary Fuzzy Block-Matching-Based Camera Raw Image Denoising.

    PubMed

    Yang, Chin-Chang; Guo, Shu-Mei; Tsai, Jason Sheng-Hong

    2016-10-03

    An evolutionary fuzzy block-matching-based image denoising algorithm is proposed to remove noise from a camera raw image. Recently, a variance stabilization transform has been widely used to stabilize the noise variance, so that a Gaussian denoising algorithm can be used to remove the signal-dependent noise in camera sensors. However, in the stabilized domain, existing denoising algorithms may blur too much detail. To provide a better estimate of the noise-free signal, a new block-matching approach is proposed to find similar blocks by the use of a type-2 fuzzy logic system (FLS). Then, these similar blocks are averaged with weightings determined by the FLS. Finally, an efficient differential evolution is used to further improve the performance of the proposed denoising algorithm. The experimental results show that the proposed denoising algorithm effectively improves the performance of image denoising. Furthermore, the average performance of the proposed method is better than those of two state-of-the-art image denoising algorithms in subjective and objective measures.

  4. Whole blood glucose analysis based on smartphone camera module

    NASA Astrophysics Data System (ADS)

    Devadhasan, Jasmine Pramila; Oh, Hyunhee; Choi, Cheol Soo; Kim, Sanghyo

    2015-11-01

    Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole blood glucose monitoring system through a point-of-care (POC) approach. A simple poly-ethylene terephthalate (PET) chip was developed to carry out the enzyme kinetic reaction at various concentrations (110-586 mg/dL) of mouse blood glucose. In this technique, the assay reagent is immobilized onto amine-functionalized silica (AFSiO2) nanoparticles via electrostatic attraction in order to achieve glucose oxidation on the chip. The assay-reagent-immobilized AFSiO2 nanoparticles form a semi-transparent reaction platform, which is technically a suitable chip to analyze with a camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module as a photon detection technique; the photon number decreases when the glucose concentration increases. The combination of these components, the CMOS image sensor and the enzyme-immobilized PET film chip, constitutes a compact, accurate, inexpensive, precise, digital, highly sensitive, specific, and optical glucose-sensing approach for POC diagnosis.
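
    The readout step, reduced to its essentials, is a mapping from the colour developed in the reaction spot to a concentration via a calibration curve. The sketch below uses a mean green-channel intensity and a hypothetical calibration table spanning the reported 110-586 mg/dL range; the ROI, channel choice, and calibration values are assumptions, not the paper's data.

      import cv2
      import numpy as np

      # Hypothetical calibration: mean green intensity vs. glucose (mg/dL);
      # intensity drops as the colour develops with increasing glucose.
      cal_intensity = np.array([210.0, 180.0, 150.0, 125.0, 105.0])
      cal_glucose   = np.array([110.0, 220.0, 330.0, 450.0, 586.0])

      def estimate_glucose(image_path, roi=(200, 200, 100, 100)):
          """Estimate glucose from the reaction-spot colour in a chip photo."""
          img = cv2.imread(image_path)
          x, y, w, h = roi
          intensity = float(img[y:y + h, x:x + w, 1].mean())   # green channel
          # np.interp needs increasing x, so interpolate on reversed arrays
          return float(np.interp(intensity, cal_intensity[::-1], cal_glucose[::-1]))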

  5. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface on the plane is guided only by the textual content's appearance in the document image while incorporating a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied on the word level aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that accounts for OCR accuracy together with a newly introduced measure based on a semi-automatic procedure.

  6. Securing quality of camera-based biomedical optics

    NASA Astrophysics Data System (ADS)

    Guse, Frank; Kasper, Axel; Zinter, Bob

    2009-02-01

    As sophisticated optical imaging technologies move into clinical applications, manufacturers need to guarantee that their products meet the required performance criteria over long lifetimes and in very different environmental conditions. Consistent quality management identifies critical component features derived from end-user requirements in a top-down approach. Careful risk analysis in the design phase defines the sample sizes for production tests, whereas first article inspection assures the reliability of the production processes. We demonstrate the application of these basic quality principles to camera-based biomedical optics for a variety of examples including molecular diagnostics, dental imaging, ophthalmology and digital radiography, covering a wide range of CCD/CMOS chip sizes and resolutions. Novel concepts in fluorescence detection and structured illumination are also highlighted.

  7. Cardiac cameras.

    PubMed

    Travin, Mark I

    2011-05-01

    Cardiac imaging with radiotracers plays an important role in patient evaluation, and the development of suitable imaging instruments has been crucial. While initially performed with the rectilinear scanner that slowly transmitted, in a row-by-row fashion, cardiac count distributions onto various printing media, the Anger scintillation camera allowed electronic determination of tracer energies and of the distribution of radioactive counts in 2D space. The increased sophistication of cardiac cameras and the development of powerful computers to analyze, display, and quantify data have been essential to making radionuclide cardiac imaging a key component of the cardiac work-up. Newer processing algorithms and solid state cameras, fundamentally different from the Anger camera, show promise to provide higher counting efficiency and resolution, leading to better image quality, more patient comfort and potentially lower radiation exposure. While the focus has been on myocardial perfusion imaging with single-photon emission computed tomography, increased use of positron emission tomography is broadening the field to include molecular imaging of the myocardium and of the coronary vasculature. Further advances may require integrating cardiac nuclear cameras with other imaging devices, i.e., hybrid imaging cameras. The goal is to image the heart and its physiological processes as accurately as possible, to prevent and cure disease processes.

  8. Design of high speed camera based on CMOS technology

    NASA Astrophysics Data System (ADS)

    Park, Sei-Hun; An, Jun-Sick; Oh, Tae-Seok; Kim, Il-Hwan

    2007-12-01

    The capacity of a high speed camera to take high speed images has been evaluated using CMOS image sensors. There are two types of image sensors, namely, CCD and CMOS sensors. A CMOS sensor consumes less power than a CCD sensor and can take images more rapidly. High speed cameras with built-in CMOS sensors are widely used in vehicle crash tests and airbag controls, golf training aids, and bullet direction measurement in the military. The high speed camera system made in this study has the following components: a CMOS image sensor that can take about 500 frames per second at a resolution of 1280 × 1024; an FPGA and DDR2 memory that control the image sensor and save images; a Camera Link module that transmits saved data to a PC; and an RS-422 communication function that enables control of the camera from a PC.

  9. MO-AB-206-02: Testing Gamma Cameras Based On TG177 WG Report.

    PubMed

    Halama, J

    2016-06-01

    This education session will cover the physics and operation principles of gamma cameras and PET scanners. The first talk will focus on PET imaging. An overview of the principles of PET imaging will be provided, including positron decay physics, and the transition from 2D to 3D imaging. More recent advances in hardware and software will be discussed, such as time-of-flight imaging, and improvements in reconstruction algorithms that provide for options such as depth-of-interaction corrections. Quantitative applications of PET will be discussed, as well as the requirements for doing accurate quantitation. Relevant performance tests will also be described.

  10. Microstructure Evaluation of Fe-BASED Amorphous Alloys Investigated by Doppler Broadening Positron Annihilation Technique

    NASA Astrophysics Data System (ADS)

    Lu, Wei; Huang, Ping; Wang, Yuxin; Yan, Biao

    2013-07-01

    The microstructure of Fe-based amorphous and nanocrystalline soft magnetic alloys has been investigated by X-ray diffraction (XRD), transmission electron microscopy (TEM) and the Doppler broadening positron annihilation technique (PAT). Doppler broadening measurement reveals that amorphous alloys (Finemet, Type I) which can form a nanocrystalline phase have more defects (free volume) than alloys (Metglas, Type II) which cannot form this microstructure. XRD and TEM characterization indicates that the nanocrystallization of the amorphous Finemet alloy occurs at 460°C, where nanocrystallites of α-Fe with an average grain size of a few nanometers are formed in an amorphous matrix. With increasing annealing temperature up to 500°C, the average grain size increases up to around 12 nm. During the annealing of the Finemet alloy, it has been demonstrated that positrons annihilate in quenched-in defects, the crystalline nanophase and amorphous-nanocrystalline interfaces. The change of the line shape parameter S with annealing temperature in the Finemet alloy is mainly due to structural relaxation, the pre-nucleation of Cu nuclei and the nanocrystallization of the α-Fe(Si) phase during annealing. This study provides new insights into positron behavior in the nanocrystallization of metallic glasses, especially in the presence of single or multiple nanophases embedded in the amorphous matrix.

  11. Camera calibration method of binocular stereo vision based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhong, Wanzhen; Dong, Xiaona

    2015-10-01

    Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the precision and efficiency of the calibration process. First, the camera model in OpenCV and an algorithm for camera calibration are presented, especially considering the influence of camera lens radial distortion and decentering distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners have also been used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results can meet the requirements of robot binocular stereo vision.
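
    A minimal OpenCV sketch of the calibration flow described above (checkerboard corner detection, per-camera intrinsic calibration, then a stereo calibration for the relative pose) is given below. The 8 × 6 inner-corner pattern matches the 48 corners mentioned in the abstract, but the square size, file names, and flags are assumptions.

      import glob
      import cv2
      import numpy as np

      pattern = (8, 6)                       # 48 inner corners
      square = 0.025                         # square size in metres (placeholder)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
      obj_pts, img_pts_l, img_pts_r, size = [], [], [], None

      for fl, fr in zip(sorted(glob.glob("left_*.png")), sorted(glob.glob("right_*.png"))):
          gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
          gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
          okl, cl = cv2.findChessboardCorners(gl, pattern)
          okr, cr = cv2.findChessboardCorners(gr, pattern)
          if okl and okr:                    # keep only frames seen by both cameras
              cl = cv2.cornerSubPix(gl, cl, (11, 11), (-1, -1), criteria)
              cr = cv2.cornerSubPix(gr, cr, (11, 11), (-1, -1), criteria)
              obj_pts.append(objp)
              img_pts_l.append(cl)
              img_pts_r.append(cr)
              size = gl.shape[::-1]

      _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, size, None, None)
      _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, size, None, None)
      # Relative pose (R, T) of the right camera with respect to the left one
      ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
          obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, size,
          flags=cv2.CALIB_FIX_INTRINSIC)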

  12. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    PubMed Central

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial absorber based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion to the original field distribution. The imaging capability of the proposed camera was characterized in both near field and far field ranges. The experimental and simulated near field images both reveal that the camera produces qualitatively accurate images with negligible distortion to the original field distribution. The far field demonstration was done by coupling the designed camera with a microwave convex lens. The far field results further demonstrate that the camera can capture quantitatively accurate electromagnetic wave distribution in the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam direction tracing. PMID:28071734

  13. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber

    NASA Astrophysics Data System (ADS)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2017-01-01

    The design, fabrication and characterization of a novel metamaterial absorber based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion to the original field distribution. The imaging capability of the proposed camera was characterized in both near field and far field ranges. The experimental and simulated near field images both reveal that the camera produces qualitatively accurate images with negligible distortion to the original field distribution. The far field demonstration was done by coupling the designed camera with a microwave convex lens. The far field results further demonstrate that the camera can capture quantitatively accurate electromagnetic wave distribution in the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam direction tracing.

  14. A subwavelength resolution microwave/6.3 GHz camera based on a metamaterial absorber.

    PubMed

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D; Simons, Rainee N; Xiao, John Q

    2017-01-10

    The design, fabrication and characterization of a novel metamaterial absorber based camera with subwavelength spatial resolution are investigated. The proposed camera features a simple and lightweight design, easy portability, low cost, high resolution and sensitivity, and minimal image interference or distortion to the original field distribution. The imaging capability of the proposed camera was characterized in both near field and far field ranges. The experimental and simulated near field images both reveal that the camera produces qualitatively accurate images with negligible distortion to the original field distribution. The far field demonstration was done by coupling the designed camera with a microwave convex lens. The far field results further demonstrate that the camera can capture quantitatively accurate electromagnetic wave distribution in the diffraction limit. The proposed camera can be used in applications such as non-destructive imaging and beam direction tracing.

  15. Prism-based single-camera system for stereo display

    NASA Astrophysics Data System (ADS)

    Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa

    2016-06-01

    This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First of all, according to the principles of geometrical optics, we can deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we can deduce the relationship between binocular viewing and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using the active shutter stereo glasses of NVIDIA Company, we can realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can make use of the prism single-camera system to simulate the various viewing behaviors of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the object being photographed.

  16. An autonomous sensor module based on a legacy CCTV camera

    NASA Astrophysics Data System (ADS)

    Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.

    2016-10-01

    A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. The paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
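
    The second operating mode (automatic pedestrian detection with real-world position estimates) can be approximated with the stock OpenCV HOG people detector plus a precomputed image-to-ground-plane homography, as sketched below. The homography values, detector parameters, and the bottom-centre foot-point heuristic are assumptions rather than the deployed SAPIENT module's design.

      import cv2
      import numpy as np

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      # Homography mapping ground-plane image points to world metres, obtained
      # offline from surveyed reference points (values are placeholders).
      H_ground = np.array([[0.02, 0.0, -5.0],
                           [0.0, 0.03, -3.0],
                           [0.0, 0.001, 1.0]])

      def detect_people(frame):
          """Return estimated ground positions (x, y) in metres of detected people."""
          rects, _ = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
          positions = []
          for (x, y, w, h) in rects:
              foot = np.array([x + w / 2.0, y + h, 1.0])   # bottom-centre of the box
              wx, wy, ww = H_ground @ foot
              positions.append((wx / ww, wy / ww))
          return positions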

  17. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer

    PubMed Central

    Shen, Bailey Y.

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm × 91mm × 45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  18. A Portable, Inexpensive, Nonmydriatic Fundus Camera Based on the Raspberry Pi® Computer.

    PubMed

    Shen, Bailey Y; Mukai, Shizuo

    2017-01-01

    Purpose. Nonmydriatic fundus cameras allow retinal photography without pharmacologic dilation of the pupil. However, currently available nonmydriatic fundus cameras are bulky, not portable, and expensive. Taking advantage of recent advances in mobile technology, we sought to create a nonmydriatic fundus camera that was affordable and could be carried in a white coat pocket. Methods. We built a point-and-shoot prototype camera using a Raspberry Pi computer, an infrared-sensitive camera board, a dual infrared and white light light-emitting diode, a battery, a 5-inch touchscreen liquid crystal display, and a disposable 20-diopter condensing lens. Our prototype camera was based on indirect ophthalmoscopy with both infrared and white lights. Results. The prototype camera measured 133mm × 91mm × 45mm and weighed 386 grams. The total cost of the components, including the disposable lens, was $185.20. The camera was able to obtain good-quality fundus images without pharmacologic dilation of the pupils. Conclusion. A fully functional, inexpensive, handheld, nonmydriatic fundus camera can be easily assembled from a relatively small number of components. With modest improvements, such a camera could be useful for a variety of healthcare professionals, particularly those who work in settings where a traditional table-mounted nonmydriatic fundus camera would be inconvenient.

  19. The E166 experiment: Development of an Undulator-Based Polarized Positron Source for the International Linear Collider

    SciTech Connect

    Kovermann, J.; Stahl, A.; Mikhailichenko, A.A.; Scott, D.; Moortgat-Pick, G.A.; Gharibyan, V.; Pahl, P.; Poschl, R.; Schuler, K.P.; Laihem, K.; Riemann, S.; Schalicke, A.; Dollan, R.; Kolanoski, H.; Lohse, T.; Schweizer, T.; McDonald, K.T.; Batygin, Y.; Bharadwaj, V.; Bower, G.; Decker, F.J.; /SLAC /Tel Aviv U. /Tennessee U.

    2011-11-14

    A longitudinally polarized positron beam is foreseen for the international linear collider (ILC). A proof-of-principle experiment has been performed in the final focus test beam at SLAC to demonstrate the production of polarized positrons for implementation at the ILC. The E166 experiment uses a 1 m long helical undulator in a 46.6 GeV electron beam to produce a few-MeV photons with a high degree of circular polarization. These photons are then converted in a thin target to generate longitudinally polarized e+ and e-. The positron polarization is measured using a Compton transmission polarimeter. The data analysis has shown asymmetries in the expected vicinity of 3.4% and ≈1% for photons and positrons, respectively, and the expected positron longitudinal polarization covers a range from 50% to 90%. The full exploitation of the physics potential of an international linear collider (ILC) will require the development of polarized positron beams. Having both e+ and e- beams polarized will provide new insight into structures of couplings and thus give access to physics beyond the standard model [1]. The concept for a polarized positron source is based on circularly polarized photon sources. These photons are then converted to longitudinally polarized e+ and e- pairs. While in an experiment at KEK [1a] Compton backscattering is used [2], the E166 experiment uses a helical undulator to produce polarized photons. An undulator-based positron source for the ILC has been proposed in [3,4]. The proposed scheme for an ILC positron source is illustrated in figure 1. In this scheme, a 150 GeV electron beam passes through a 120 m long helical undulator to produce an intense photon beam with a high degree of circular polarization. These photons are converted in a thin target to e+ e- pairs. The polarized positrons are then collected, pre-accelerated to the damping ring and injected to the main linac. The E166 experiment is

  20. Positron microscopy

    SciTech Connect

    Hulett, L.D. Jr.; Xu, J.

    1995-02-01

    The negative work function property that some materials have for positrons makes possible the development of positron reemission microscopy (PRM). Because of the low energies with which the positrons are emitted, some unique applications, such as the imaging of defects, can be made. The history of the concept of PRM and its present state of development will be reviewed. The potential of positron microprobe techniques will also be discussed.

  1. A bionic camera-based polarization navigation sensor.

    PubMed

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-07-21

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode.

  2. A hemispherical electronic eye camera based on compressible silicon optoelectronics.

    PubMed

    Ko, Heung Cho; Stoykovich, Mark P; Song, Jizhou; Malyarchuk, Viktor; Choi, Won Mook; Yu, Chang-Jae; Geddes, Joseph B; Xiao, Jianliang; Wang, Shuodao; Huang, Yonggang; Rogers, John A

    2008-08-07

    The human eye is a remarkable imaging device, with many attractive design features. Prominent among these is a hemispherical detector geometry, similar to that found in many other biological systems, that enables a wide field of view and low aberrations with simple, few-component imaging optics. This type of configuration is extremely difficult to achieve using established optoelectronics technologies, owing to the intrinsically planar nature of the patterning, deposition, etching, materials growth and doping methods that exist for fabricating such systems. Here we report strategies that avoid these limitations, and implement them to yield high-performance, hemispherical electronic eye cameras based on single-crystalline silicon. The approach uses wafer-scale optoelectronics formed in unusual, two-dimensionally compressible configurations and elastomeric transfer elements capable of transforming the planar layouts in which the systems are initially fabricated into hemispherical geometries for their final implementation. In a general sense, these methods, taken together with our theoretical analyses of their associated mechanics, provide practical routes for integrating well-developed planar device technologies onto the surfaces of complex curvilinear objects, suitable for diverse applications that cannot be addressed by conventional means.

  3. A Bionic Camera-Based Polarization Navigation Sensor

    PubMed Central

    Wang, Daobin; Liang, Huawei; Zhu, Hui; Zhang, Shuai

    2014-01-01

    Navigation and positioning technology is closely related to our routine life activities, from travel to aerospace. Recently it has been found that Cataglyphis (a kind of desert ant) is able to detect the polarization direction of skylight and navigate according to this information. This paper presents a real-time bionic camera-based polarization navigation sensor. This sensor has two work modes: one is a single-point measurement mode and the other is a multi-point measurement mode. An indoor calibration experiment of the sensor has been done under a beam of standard polarized light. The experiment results show that after noise reduction the accuracy of the sensor can reach up to 0.3256°. It is also compared with GPS and INS (Inertial Navigation System) in the single-point measurement mode through an outdoor experiment. Through time compensation and location compensation, the sensor can be a useful alternative to GPS and INS. In addition, the sensor also can measure the polarization distribution pattern when it works in multi-point measurement mode. PMID:25051029

  4. Interactive facial caricaturing system based on eye camera

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Tsuyoshi; Tominaga, Masafumi; Koshimizu, Hiroyasu

    2003-04-01

    The face is the most effective visual medium for supporting human interface and communication. We have previously proposed a typical KANSEI machine vision system to generate facial caricatures. The basic principle of this system uses the "mean face assumption" to extract the individual features of a given face. This system did not provide for feedback from the gallery (the viewer) of the caricature; therefore, to allow for such feedback, in this paper, we propose a caricaturing system using the KANSEI visual information acquired from the Eye-camera mounted on the head of the gallery, because it is well known that the gaze distribution represents not only where but also how the gallery is looking at the face. The caricatures created in this way could be based on several measures derived from the distribution of the number of fixations on the facial parts, the number of times the gaze came to a particular area of the face, and the matrix of the transitions from one facial region to another. These measures of the gallery's KANSEI information were used to create caricatures with feedback from the gallery.

  5. Only Image Based for the 3d Metric Survey of Gothic Structures by Using Frame Cameras and Panoramic Cameras

    NASA Astrophysics Data System (ADS)

    Pérez Ramos, A.; Robleda Prieto, G.

    2016-06-01

    An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows in combination with the apse shape makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many Terrestrial Laser Scanner (TLS) internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external quality images are often used to build high resolution textures of these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making that task more affordable using exclusively techniques based on images. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office-work are simplified. The proposed methodology provides photographs in good enough conditions for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate more deliverables without extra time spent in the field, for instance, immersive virtual tours. In order to verify the usefulness of the method, it has been decided to apply it to the apse since it is considered one of the most complex elements of Gothic churches, and it could be extended to the whole building.

  6. A real-time camera calibration system based on OpenCV

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration with higher precision than MATLAB, does not need manual intervention, and can be widely used in various computer vision systems.

  7. 3D measurement and camera attitude estimation method based on trifocal tensor

    NASA Astrophysics Data System (ADS)

    Chen, Shengyi; Liu, Haibo; Yao, Linshen; Yu, Qifeng

    2016-11-01

    To simultaneously perform 3D measurement and camera attitude estimation, an efficient and robust method based on the trifocal tensor is proposed in this paper, which only employs the intrinsic parameters and positions of three cameras. The initial trifocal tensor is obtained by using the heteroscedastic errors-in-variables (HEIV) estimator, and the initial relative poses of the three cameras are acquired by decomposing the tensor. Further, the initial attitude of the cameras is obtained with knowledge of the three cameras' positions. Then the camera attitude and the image positions of the points of interest are optimized according to the trifocal tensor constraint with the HEIV method. Finally the spatial positions of the points are obtained by using the intersection measurement method. Both simulation and real image experiment results suggest that the proposed method achieves the same precision as the bundle adjustment (BA) method while being more efficient.
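
    The final intersection step can be illustrated, in a reduced two-view form, with cv2.triangulatePoints once each camera's intrinsics and pose are known. The intrinsics, poses, and matched points below are placeholders, and the paper itself works with three views and an HEIV-refined trifocal-tensor constraint rather than this plain two-view triangulation.

      import cv2
      import numpy as np

      def projection_matrix(K, R, t):
          """3x4 projection matrix P = K [R | t]."""
          return K @ np.hstack([R, t.reshape(3, 1)])

      # Placeholder intrinsics and poses for two of the cameras
      K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
      R1, t1 = np.eye(3), np.zeros(3)
      R2 = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))[0]
      t2 = np.array([-0.5, 0.0, 0.0])
      P1, P2 = projection_matrix(K, R1, t1), projection_matrix(K, R2, t2)

      # Matched, undistorted image points (2xN arrays: row 0 = x, row 1 = y)
      pts1 = np.array([[630.0, 700.0], [350.0, 340.0]])
      pts2 = np.array([[610.0, 675.0], [355.0, 338.0]])

      X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4xN
      X = (X_h[:3] / X_h[3]).T                          # Nx3 points in the world frame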

  8. Ultra Fast X-ray Streak Camera for TIM Based Platforms

    SciTech Connect

    Marley, E; Shepherd, R; Fulkerson, E S; James, L; Emig, J; Norman, D

    2012-05-02

    Ultra fast x-ray streak cameras are a staple for time resolved x-ray measurements. There is a need for a ten inch manipulator (TIM) based streak camera that can be fielded in a newer large scale laser facility. The LLNL ultra fast streak camera's drive electronics have been upgraded and redesigned to fit inside a TIM tube. The camera also has a new user interface that allows for remote control and data acquisition. The system has been outfitted with a new sensor package that gives the user more operational awareness and control.

  9. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  10. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
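
    The regulation idea (nudging each Slave camera's supply voltage until its measured line period matches the Master's) can be sketched as a simple proportional controller. The real design is an FPGA control core with a DAC-driven supply; the gains, voltage limits, and interface below are hypothetical.

      class LinePeriodRegulator:
          """Proportional controller that adjusts a slave camera's supply voltage
          so its measured line period tracks the master's."""

          def __init__(self, v_init=1.8, gain=0.002, v_min=1.6, v_max=2.1):
              self.voltage = v_init      # volts currently applied to the sensor
              self.gain = gain           # volts per clock cycle of period error
              self.v_min, self.v_max = v_min, v_max

          def update(self, slave_period, master_period):
              # A longer line period than the master means the sensor runs too
              # slowly, so the supply voltage is raised slightly (and vice versa).
              error = slave_period - master_period
              self.voltage = min(self.v_max, max(self.v_min, self.voltage + self.gain * error))
              return self.voltage

      reg = LinePeriodRegulator()
      for slave_p, master_p in [(1012, 1000), (1006, 1000), (1001, 1000)]:
          v = reg.update(slave_p, master_p)   # value that would be written to the DAC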

  11. A four-lens based plenoptic camera for depth measurements

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe

    2015-04-01

    In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method has been defined for working with fabric samples progressing on a conveyor belt. Triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one, by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is then to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
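
    In its simplest pinhole reading, a depth measurement from two of the four sub-images reduces to z = f·B/d, with f the focal length in pixels, B the baseline between the lenses and d the disparity; the variable-homography formulation in the paper generalizes this single-acquisition idea. The numbers below are placeholders.

      def depth_from_disparity(x_left_px, x_right_px, focal_px, baseline_m):
          """Pinhole depth estimate z = f * B / d from one pair of sub-images."""
          disparity = abs(x_left_px - x_right_px)       # pixels
          return focal_px * baseline_m / disparity      # metres

      # Placeholder values: 1200 px focal length, 8 mm baseline between lenses
      z = depth_from_disparity(512.0, 498.0, 1200.0, 0.008)   # about 0.69 m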

  12. Development of a treatment planning system for BNCT based on positron emission tomography data: preliminary results

    NASA Astrophysics Data System (ADS)

    Cerullo, N.; Daquino, G. G.; Muzi, L.; Esposito, J.

    2004-01-01

    Present standard treatment planning (TP) for glioblastoma multiforme (GBM - a kind of brain tumor), used in all boron neutron capture therapy (BNCT) trials, requires the construction (based on CT and/or MRI images) of a 3D model of the patient head, in which several regions, corresponding to different anatomical structures, are identified. The model is then employed by a computer code to simulate radiation transport in human tissues. The assumption is always made that considering a single value of boron concentration for each specific region will not lead to significant errors in dose computation. The concentration values are estimated "indirectly", on the basis of previous experience and blood sample analysis. This paper describes an original approach, with the introduction of data on the in vivo boron distribution, acquired by a positron emission tomography (PET) scan after labeling the BPA (borono-phenylalanine) with the positron emitter 18F. The feasibility of this approach was first tested with good results using the code CARONTE. Now a complete TPS is under development. The main features of the first version of this code are described and the results of a preliminary study are presented. Significant differences in dose computation arise when the two different approaches ("standard" and "PET-based") are applied to the TP of the same GBM case.

  13. The determination of the intrinsic and extrinsic parameters of virtual camera based on OpenGL

    NASA Astrophysics Data System (ADS)

    Li, Suqi; Zhang, Guangjun; Wei, Zhenzhong

    2006-11-01

    OpenGL is an international standard for 3D graphics, and image generation in OpenGL is analogous to image formation in a camera. This paper focuses on the application of OpenGL to computer vision, in which the OpenGL 3D image is regarded as a virtual camera image. Firstly, the imaging mechanism of OpenGL is analyzed from the viewpoint of the perspective projection transformation used for computer vision cameras. Then, the relationship between the intrinsic and extrinsic parameters of a camera and the function parameters in OpenGL is analyzed, and the transformation formulas are deduced; on this basis, a computer vision simulation is realized. A comparison between actual CCD camera images and virtual camera images (with the parameters of the actual camera matching those of the virtual camera), together with the experimental results of a stereo vision 3D reconstruction simulation, verifies the effectiveness of the method for determining the intrinsic and extrinsic parameters of the OpenGL-based virtual camera.
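
    For readers who want to reproduce the correspondence between camera intrinsics and OpenGL parameters, the sketch below builds an OpenGL-style projection matrix from pinhole intrinsics under one common convention (camera looking down the negative Z axis, with the sign of the cy term absorbing the image y-flip). Conventions vary, so this is an illustrative mapping rather than the paper's exact derivation.

    ```python
    # Hedged sketch: one common mapping from pinhole intrinsics (fx, fy, cx, cy) to an
    # OpenGL-style 4x4 projection matrix. Sign conventions differ between implementations.
    import numpy as np

    def intrinsics_to_opengl_projection(fx, fy, cx, cy, width, height, near, far):
        """Build a 4x4 OpenGL projection matrix from camera intrinsic parameters."""
        P = np.zeros((4, 4))
        P[0, 0] = 2.0 * fx / width
        P[1, 1] = 2.0 * fy / height
        P[0, 2] = 1.0 - 2.0 * cx / width
        P[1, 2] = 2.0 * cy / height - 1.0
        P[2, 2] = -(far + near) / (far - near)
        P[2, 3] = -2.0 * far * near / (far - near)
        P[3, 2] = -1.0
        return P

    # Example: a 640x480 virtual camera with an 800-pixel focal length and 0.1-100 clip planes
    print(intrinsics_to_opengl_projection(800, 800, 320, 240, 640, 480, 0.1, 100.0))
    ```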

  14. Positron microprobe at LLNL

    SciTech Connect

    Asoka, P; Howell, R; Stoeffl, W

    1998-11-01

    The electron linac based positron source at Lawrence Livermore National Laboratory (LLNL) provides the world's highest current beam of keV positrons. We are building a positron microprobe that will produce a pulsed, focused positron beam for 3-dimensional scans of defect size and concentration with sub-micron resolution. The widely spaced and intense positron packets from the tungsten moderator at the end of the 100 MeV LLNL linac are captured and trapped in a magnetic bottle. The positrons are then released in 1 ns bunches at a 20 MHz repetition rate. With a three-stage re-moderation we will compress the cm-sized original beam to a 1 micrometer diameter final spot on the target. The buncher will compress the arrival time of positrons on the target to less than 100 ps. A detector array with up to 60 BaF2 crystals in paired coincidence will measure the annihilation radiation with high efficiency and low background. The energy of the positrons can be varied from less than 1 keV up to 50 keV.

  15. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    NASA Astrophysics Data System (ADS)

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  16. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs.

    PubMed

    De Lorenzo, G; Chmeissani, M; Uzun, D; Kolstein, M; Ozsahin, I; Mikhaylova, E; Arce, P; Cañadas, M; Ariño, G; Calderón, Y

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events.

  17. Pixelated CdTe detectors to overcome intrinsic limitations of crystal based positron emission mammographs

    PubMed Central

    De Lorenzo, G.; Chmeissani, M.; Uzun, D.; Kolstein, M.; Ozsahin, I.; Mikhaylova, E.; Arce, P.; Cañadas, M.; Ariño, G.; Calderón, Y.

    2013-01-01

    A positron emission mammograph (PEM) is an organ dedicated positron emission tomography (PET) scanner for breast cancer detection. State-of-the-art PEMs employing scintillating crystals as detection medium can provide metabolic images of the breast with significantly higher sensitivity and specificity with respect to standard whole body PET scanners. Over the past few years, crystal PEMs have dramatically increased their importance in the diagnosis and treatment of early stage breast cancer. Nevertheless, designs based on scintillators are characterized by an intrinsic deficiency of the depth of interaction (DOI) information from relatively thick crystals constraining the size of the smallest detectable tumor. This work shows how to overcome such intrinsic limitation by substituting scintillating crystals with pixelated CdTe detectors. The proposed novel design is developed within the Voxel Imaging PET (VIP) Pathfinder project and evaluated via Monte Carlo simulation. The volumetric spatial resolution of the VIP-PEM is expected to be up to 6 times better than standard commercial devices with a point spread function of 1 mm full width at half maximum (FWHM) in all directions. Pixelated CdTe detectors can also provide an energy resolution as low as 1.5% FWHM at 511 keV for a virtually pure signal with negligible contribution from scattered events. PMID:23750176

  18. Inspection focus technology of space tridimensional mapping camera based on astigmatic method

    NASA Astrophysics Data System (ADS)

    Wang, Zhi; Zhang, Liping

    2010-10-01

    The CCD plane of a space tridimensional mapping camera can deviate from the focal plane (including deviation of the CCD plane caused by a change of the camera focal length) under the conditions of the space environment and the vibration and impact of satellite launch, and image resolution is degraded by the resulting defocus. For a tridimensional mapping camera, variations of the principal point position and focal length affect the positioning accuracy of ground targets. The conventional solution is to calibrate the position of the CCD plane against the code of a photoelectric encoder under vacuum over the focusing range; when the camera defocuses in orbit, the magnitude and direction of the defocus are obtained from the photoelectric encoder, and the focusing mechanism, driven by a step motor, compensates the defocus of the CCD plane. However, if the camera focal length itself changes under the space environment or the vibration and impact of launch, this focusing method becomes meaningless. Thus, a measuring and focusing method based on astigmatism was put forward: a quadrant detector is adopted to measure the astigmatism caused by the deviation of the CCD plane, and by referring to the calibrated relation between the CCD plane position and the astigmatism, the deviation vector of the CCD plane can be obtained. This method accounts for all factors causing deviation of the CCD plane. Experimental results show that the focusing resolution of the mapping camera focusing mechanism based on the astigmatic method can reach 0.25 μm.
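
    A minimal sketch of the astigmatic focus-error signal obtained from a quadrant detector, assuming quadrants A, B, C, D are labeled so that A and C lie on one diagonal; the real system maps this signal to the CCD-plane deviation through a calibrated relation.

    ```python
    # Hedged sketch: the standard astigmatic focus-error signal from a quadrant detector.
    # Quadrants A and C lie on one diagonal, B and D on the other.
    def focus_error_signal(a, b, c, d):
        """Normalized focus error: near zero at best focus, sign gives the defocus direction."""
        total = a + b + c + d
        return ((a + c) - (b + d)) / total if total > 0 else 0.0

    # Example: a defocused spot elongated along the A-C diagonal
    print(focus_error_signal(a=0.35, b=0.15, c=0.35, d=0.15))   # 0.4 -> defocus in one direction
    ```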

  19. Base Intrusion Schottky Barrier IR Assessment Camera Study.

    DTIC Science & Technology

    1981-09-01

    detection line sensors. The program includes coverage studies to determine requirements for array size and camera complexity to provide cost-effective...addition, hardware studies are being conducted to determine design requirements and specifications for development and for future field testing of an...Since the early 1970s, RCA has been actively engaged in the development of IRI Schottky barrier line and area FPAs for the Air Force RADC Deputy for

  20. Streak camera based SLR receiver for two color atmospheric measurements

    NASA Technical Reports Server (NTRS)

    Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael

    1993-01-01

    To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds per pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and UV (355 nm), while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned along the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data are processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using half-maximum and centroid type analysis. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. Data
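
    The waveform processing described above (iterative three-point running average followed by centroid analysis) can be sketched as follows; the array names and the conversion from pixels to picoseconds are illustrative assumptions.

    ```python
    # Hedged sketch of the described waveform processing: iterative 3-point smoothing
    # followed by a centroid estimate of each pulse position along the 1024-pixel time axis.
    import numpy as np

    def smooth_waveform(y, iterations=20):
        """Iterative three-point running average to remove high-frequency structure."""
        y = np.asarray(y, dtype=float)
        for _ in range(iterations):
            y = np.convolve(y, np.ones(3) / 3.0, mode="same")
        return y

    def centroid_position(y):
        """Intensity-weighted centroid of a pulse along the time (pixel) axis."""
        y = np.asarray(y, dtype=float)
        t = np.arange(y.size)
        return np.sum(t * y) / np.sum(y)

    # Pulse-pair separation in pixels (multiply by the calibrated ps/pixel sweep rate):
    # separation_px = centroid_position(smooth_waveform(pulse2)) - centroid_position(smooth_waveform(pulse1))
    ```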

  1. Design of an infrared camera based aircraft detection system for laser guide star installations

    SciTech Connect

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance resulting from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.

  2. One high-accuracy camera calibration algorithm based on computer vision images

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Huang, Jianming; Wei, Xiangquan

    2015-12-01

    Camera calibration is the first step in computer vision and one of the most active research fields nowadays. In order to improve measurement precision, the internal parameters of the camera should be accurately calibrated. Therefore, a high-accuracy camera calibration algorithm is proposed based on images of planar or tridimensional targets. Using the algorithm, the internal parameters of the camera are calibrated based on the existing planar target in a vision-based navigation experiment. The experimental results show that the accuracy of the proposed algorithm is clearly improved compared with the conventional linear algorithm, the Tsai general algorithm, and Zhang Zhengyou's calibration algorithm. The proposed algorithm can satisfy the needs of computer vision and provide a reference for precise measurement of relative position and attitude.
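
    A minimal sketch of planar-target calibration with OpenCV, in the spirit of the comparison above; the chessboard dimensions, square size, and image file names are assumptions, and the paper's own algorithm may differ in its optimization details.

    ```python
    # Hedged sketch: chessboard-based intrinsic calibration with OpenCV.
    # Board geometry and file naming are assumed for illustration.
    import glob
    import cv2
    import numpy as np

    pattern = (9, 6)        # inner corners per row/column (assumed)
    square = 25.0           # square size in mm (assumed)

    # 3D points of the planar target in its own frame (Z = 0)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

    obj_points, img_points, image_size = [], [], None
    for path in glob.glob("calib_*.png"):                    # calibration images (assumed naming)
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Intrinsic matrix and distortion coefficients estimated from all detected views
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
    print("RMS reprojection error:", rms)
    print("Intrinsic matrix:\n", K)
    ```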

  3. Multitarget visual tracking based effective surveillance with cooperation of multiple active cameras.

    PubMed

    Huang, Cheng-Ming; Fu, Li-Chen

    2011-02-01

    This paper presents a tracking-based surveillance system that is capable of tracking multiple moving objects, with almost real-time response, through the effective cooperation of multiple pan-tilt cameras. To construct this surveillance system, the distributed camera agent, which tracks multiple moving objects independently, is first developed. The particle filter is extended with a target depth estimate to track multiple targets that may overlap with one another. A strategy to select the suboptimal camera action is then proposed for a camera mounted on a pan-tilt platform that has been assigned to track multiple targets simultaneously within its limited field of view. This strategy is based on mutual information and the Monte Carlo method to maintain coverage of the tracked targets. Finally, so that a surveillance system with a small number of active cameras can effectively monitor a wide space, the system aims to maximize the number of targets being tracked. We further propose a hierarchical camera selection and task assignment strategy, known as the online position strategy, to integrate all of the distributed camera agents. The overall performance of the multicamera surveillance system has been verified with computer simulations and extensive experiments.

  4. A method of diameter measurement for spur gear based on camera calibration

    NASA Astrophysics Data System (ADS)

    Wu, Ziyue; Geng, Jinfeng; Xu, Zhe

    2012-04-01

    Camera calibration is the basis for putting computer vision technology into practice. This paper proposes a new method based on camera calibration for the diameter measurement of spur gears and analyses the errors arising from calibration and measurement. Diameter values are obtained by first determining the intrinsic and extrinsic parameters through camera calibration, then transforming the feature points extracted from the image plane of the gear from image coordinates to 3D world coordinates, and finally computing the distance between the feature points. The experimental results demonstrate that the method is simple, quick, easy to implement, and highly precise, and is hardly limited by the size of the target.
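
    A minimal sketch of the measurement chain described above, assuming the gear lies in the world plane Z = 0 so that image points can be back-projected through the planar homography built from the calibrated intrinsics and extrinsics; the matrices and pixel coordinates below are placeholder values, not data from the paper.

    ```python
    # Hedged sketch: map image feature points to the gear plane (world Z = 0) using calibrated
    # intrinsics K and extrinsics (R, t), then measure the diameter as a Euclidean distance.
    import numpy as np

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0,   0.0,   1.0]])       # example intrinsics (placeholder)
    R = np.eye(3)                              # example rotation (placeholder)
    t = np.array([0.0, 0.0, 500.0])            # camera 500 mm above the gear plane (placeholder)

    def image_to_world_on_plane(uv, K, R, t):
        """Back-project a pixel (u, v) onto the world plane Z = 0 via the planar homography."""
        H = K @ np.column_stack((R[:, 0], R[:, 1], t))   # homography for the Z = 0 plane
        xyw = np.linalg.inv(H) @ np.array([uv[0], uv[1], 1.0])
        return xyw[:2] / xyw[2]                           # world (X, Y) on the gear plane

    # Two diametrically opposite edge points detected in the image (illustrative pixel values)
    p1 = image_to_world_on_plane((412.3, 188.7), K, R, t)
    p2 = image_to_world_on_plane((118.9, 470.2), K, R, t)
    print("measured diameter:", np.linalg.norm(p1 - p2))
    ```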

  5. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  6. Camera-based curvature measurement of a large incandescent object

    NASA Astrophysics Data System (ADS)

    Ollikkala, Arttu V. H.; Kananen, Timo P.; Mäkynen, Anssi J.; Holappa, Markus

    2013-04-01

    The goal of this work was to implement a low-cost machine vision system to help the roller operator estimate the amount of strip camber during the rolling process. The machine vision system, comprising a single camera, a standard PC computer and a LabVIEW-written program using straightforward image analysis, determines the magnitude and direction of camber and presents the results both in numerical and graphical form on the computer screen. The system was calibrated with an LED set-up, which was also used to validate the accuracy of the system by mimicking strip curvatures. The validation showed that the maximum difference between the true and measured values was less than +/-4 mm (k=0.95) within the 22-meter-long test pattern.

  7. FPGA-based data acquisition system for a Compton camera

    NASA Astrophysics Data System (ADS)

    Nurdan, K.; Çonka-Nurdan, T.; Besch, H. J.; Freisleben, B.; Pavel, N. A.; Walenta, A. H.

    2003-09-01

    A data acquisition (DAQ) system with custom back-plane and custom readout boards has been developed for a Compton camera prototype. The DAQ system consists of two layers. The first layer has units for parallel high-speed analog-to-digital conversion and online data pre-processing. The second layer has a central board to form a general event trigger and to build the data structure for the event. This modularity and the use of field programmable gate arrays make the whole DAQ system highly flexible and adaptable to modified experimental setups. The design specifications, the general architecture of the Trigger and DAQ system and the implemented readout protocols are presented in this paper.

  8. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.

  9. Localization-based super-resolution microscopy with an sCMOS camera part II: experimental methodology for comparing sCMOS with EMCCD cameras.

    PubMed

    Long, Fan; Zeng, Shaoqun; Huang, Zhen-Li

    2012-07-30

    Nowadays, there is a hot debate among industry and academic researchers about whether the newly developed scientific-grade Complementary Metal Oxide Semiconductor (sCMOS) cameras could become the image sensors of choice in localization-based super-resolution microscopy. To help researchers find answers to this question, here we report an experimental methodology for quantitatively comparing the performance of low-light cameras in single molecule detection (characterized via image SNR) and localization (via localization accuracy). We found that a newly launched sCMOS camera can deliver imaging performance superior to that of a popular Electron Multiplying Charge Coupled Device (EMCCD) camera over a signal range (15-12000 photons/pixel) that is more than sufficient for typical localization-based super-resolution microscopy.
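
    As a back-of-the-envelope illustration (not part of the paper, which is experimental), a textbook per-pixel SNR model shows why a low-read-noise sCMOS sensor can overtake an EMCCD, whose multiplication process adds an excess noise factor; the noise parameters below are assumed typical values.

    ```python
    # Hedged illustration (not from the paper): simple per-pixel SNR model for sCMOS vs. EMCCD.
    # Parameter values are assumed typical figures, not measurements.
    import numpy as np

    signal = np.logspace(np.log10(15), np.log10(12000), 50)   # detected photons/pixel (paper's range)

    read_noise_scmos = 1.5        # e- rms, assumed
    read_noise_emccd = 60.0       # e- rms before EM gain, assumed
    em_gain = 300.0               # EM gain, assumed
    excess_noise = np.sqrt(2.0)   # EM multiplication excess noise factor

    snr_scmos = signal / np.sqrt(signal + read_noise_scmos**2)
    snr_emccd = signal / np.sqrt(excess_noise**2 * signal + (read_noise_emccd / em_gain)**2)

    crossover = signal[np.argmax(snr_scmos > snr_emccd)]
    print(f"In this model, sCMOS SNR exceeds EMCCD SNR above roughly {crossover:.0f} photons/pixel")
    ```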

  10. MARS: a mouse atlas registration system based on a planar x-ray projector and an optical camera

    NASA Astrophysics Data System (ADS)

    Wang, Hongkai; Stout, David B.; Taschereau, Richard; Gu, Zheng; Vu, Nam T.; Prout, David L.; Chatziioannou, Arion F.

    2012-10-01

    This paper introduces a mouse atlas registration system (MARS), composed of a stationary top-view x-ray projector and a side-view optical camera, coupled to a mouse atlas registration algorithm. This system uses the x-ray and optical images to guide a fully automatic co-registration of a mouse atlas with each subject, in order to provide anatomical reference for small animal molecular imaging systems such as positron emission tomography (PET). To facilitate the registration, a statistical atlas that accounts for inter-subject anatomical variations was constructed based on 83 organ-labeled mouse micro-computed tomography (CT) images. The statistical shape model and conditional Gaussian model techniques were used to register the atlas with the x-ray image and optical photo. The accuracy of the atlas registration was evaluated by comparing the registered atlas with the organ-labeled micro-CT images of the test subjects. The results showed excellent registration accuracy of the whole-body region, and good accuracy for the brain, liver, heart, lungs and kidneys. In its implementation, the MARS was integrated with a preclinical PET scanner to deliver combined PET/MARS imaging, and to facilitate atlas-assisted analysis of the preclinical PET images.

  11. Minicyclotron-based technology for the production of positron-emitting labelled radiopharmaceuticals

    SciTech Connect

    Barrio, J.R.; Bida, G.; Satyamurthy, N.; Padgett, H.C.; MacDonald, N.S.; Phelps, M.E.

    1983-01-01

    The use of short-lived positron emitters such as carbon 11, fluorine 18, nitrogen 13, and oxygen 15, together with positron-emission tomography (PET) for probing the dynamics of physiological and biochemical processes in the normal and diseased states in man is presently an active area of research. One of the pivotal elements for the continued growth and success of PET is the routine delivery of the desired positron emitting labelled compounds. To date, the cyclotron remains the accelerator of choice for production of medically useful radionuclides. The development of the technology to bring the use of cyclotrons to a clinical setting is discussed. (ACR)

  12. Comparison of FDG PET and positron coincidence detection imaging using a dual-head gamma camera with 5/8-inch NaI(Tl) crystals in patients with suspected body malignancies.

    PubMed

    Boren, E L; Delbeke, D; Patton, J A; Sandler, M P

    1999-04-01

    The purpose of this study was to compare the diagnostic accuracy of fluorine-18 fluorodeoxyglucose (FDG) images obtained with (a) a dual-head coincidence gamma camera (DHC) equipped with 5/8-inch-thick NaI(Tl) crystals and parallel slit collimators and (b) a dedicated positron emission tomograph (PET) in a series of 28 patients with known or suspected malignancies. Twenty-eight patients with known or suspected malignancies underwent whole-body FDG PET imaging (Siemens, ECAT 933) after injection of approximately 10 mCi of 18F-FDG. FDG DHC images were then acquired for 30 min over the regions of interest using a dual-head gamma camera (VariCam, Elscint). The images were reconstructed in the normal mode, using photopeak/photopeak, photopeak/Compton, and Compton/photopeak coincidence events. FDG PET imaging found 45 lesions ranging in size from 1 cm to 7 cm in 28 patients. FDG DHC imaging detected 35/45 (78%) of these lesions. Among the ten lesions not seen with FDG DHC imaging, eight were less than 1.5 cm in size, and two were located centrally within the abdomen and suffered from marked attenuation effects. The lesions were classified into three categories: thorax (n=24), liver (n=12), and extrahepatic abdominal (n=9). FDG DHC imaging identified 100% of lesions above 1.5 cm in the thorax group and 78% of those below 1.5 cm, for an overall total of 83%. FDG DHC imaging identified 100% of lesions above 1.5 cm in the liver and 43% of lesions below 1.5 cm, for an overall total of 67%. FDG DHC imaging identified 78% of lesions above 1.5 cm in the extrahepatic abdominal group. There were no lesions below 1.5 cm in this group. FDG coincidence imaging using a dual-head gamma camera detected 90% of lesions greater than 1.5 cm. These data suggest that DHC imaging can be used clinically in well-defined diagnostic situations to differentiate benign from malignant lesions.

  13. Multi-camera calibration based on openCV and multi-view registration

    NASA Astrophysics Data System (ADS)

    Deng, Xiao-ming; Wan, Xiong; Zhang, Zhi-min; Leng, Bi-yan; Lou, Ning-ning; He, Shuai

    2010-10-01

    For multi-camera calibration systems, a method that combines OpenCV-based calibration with multi-view registration is proposed. First, using a Zhang calibration plate (an 8x8 chessboard pattern) and three industrial-grade CCD cameras, nine groups of images are captured from different angles, and OpenCV is used to quickly calibrate the intrinsic parameters of each camera. Secondly, based on the corresponding relationships between the camera views, the computation of the rotation and translation matrices is formulated as a constrained optimization problem. According to the Kuhn-Tucker theorem and the properties of the derivative of a matrix-valued function, formulae for the rotation and translation matrices are deduced using the singular value decomposition. Afterwards, an iterative method is used to obtain the complete coordinate transformations between pair-wise views, so that precise multi-view registration is conveniently achieved and the relative positions of the cameras (the extrinsic parameters) are obtained. Experimental results show that the method is practical for multi-camera calibration.
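
    One standard closed-form way to obtain a rotation and translation between two views from corresponding 3D points is the SVD-based (Procrustes/Kabsch-type) solution sketched below; the paper's derivation via the Kuhn-Tucker theorem may differ in detail, so this is an illustrative stand-in.

    ```python
    # Hedged sketch: SVD-based estimate of the rigid transform aligning corresponding 3D points
    # from two camera views (Procrustes/Kabsch-type solution).
    import numpy as np

    def rigid_transform_svd(P, Q):
        """Find R, t minimizing sum ||R @ P_i + t - Q_i||^2 over corresponding point sets (N x 3)."""
        p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
        H = (P - p_mean).T @ (Q - q_mean)                 # 3x3 cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = q_mean - R @ p_mean
        return R, t
    ```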

  14. Medium Format Camera Evaluation Based on the Latest Phase One Technology

    NASA Astrophysics Data System (ADS)

    Tölg, T.; Kemper, G.; Kalinski, D.

    2016-06-01

    In early 2016, Phase One Industrial launched a new high-resolution camera with a 100 MP CMOS sensor. CCD sensors excel at ISOs up to 200, but in lower light conditions exposure time must be increased and Forward Motion Compensation (FMC) has to be employed to avoid smearing the images. The CMOS sensor has an ISO range of up to 6400, which enables short exposures instead of using FMC. This paper aims to evaluate the strengths of each of the sensor types based on real missions over a test field in Speyer, Germany, used for airborne camera calibration. The test field area has about 30 Ground Control Points (GCPs), which provides an ideal scenario for a proper geometric evaluation of the cameras. The test field includes both a Siemens star and scale bars to show any blurring caused by forward motion. The result of the comparison showed that both cameras offer highly accurate photogrammetric results with post-processing, including triangulation, calibration, orthophoto and DEM generation. The forward motion effect can be compensated by a fast shutter speed and the higher ISO range of the CMOS-based camera. The results showed no significant differences between the cameras.

  15. Defining habitat covariates in camera-trap based occupancy studies

    PubMed Central

    Niedballa, Jürgen; Sollmann, Rahel; Mohamed, Azlan bin; Bender, Johannes; Wilting, Andreas

    2015-01-01

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10–500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations. PMID:26596779

  16. Defining habitat covariates in camera-trap based occupancy studies.

    PubMed

    Niedballa, Jürgen; Sollmann, Rahel; bin Mohamed, Azlan; Bender, Johannes; Wilting, Andreas

    2015-11-24

    In species-habitat association studies, both the type and spatial scale of habitat covariates need to match the ecology of the focal species. We assessed the potential of high-resolution satellite imagery for generating habitat covariates using camera-trapping data from Sabah, Malaysian Borneo, within an occupancy framework. We tested the predictive power of covariates generated from satellite imagery at different resolutions and extents (focal patch sizes, 10-500 m around sample points) on estimates of occupancy patterns of six small to medium sized mammal species/species groups. High-resolution land cover information had considerably more model support for small, patchily distributed habitat features, whereas it had no advantage for large, homogeneous habitat features. A comparison of different focal patch sizes including remote sensing data and an in-situ measure showed that patches with a 50-m radius had most support for the target species. Thus, high-resolution satellite imagery proved to be particularly useful in heterogeneous landscapes, and can be used as a surrogate for certain in-situ measures, reducing field effort in logistically challenging environments. Additionally, remote sensed data provide more flexibility in defining appropriate spatial scales, which we show to impact estimates of wildlife-habitat associations.

  17. Research of aerial camera focal plane micro-displacement measurement system based on Michelson interferometer

    NASA Astrophysics Data System (ADS)

    Wang, Shu-juan; Zhao, Yu-liang; Li, Shu-jun

    2014-09-01

    Keeping the aerial camera focal plane in the correct position is critical to imaging quality. In order to correct focal plane displacement introduced during maintenance, a new micro-displacement measurement system for the aerial camera focal plane based on a Michelson interferometer has been designed in this paper; it relies on the phase modulation principle and uses interference to measure the focal plane micro-displacement. The system takes a He-Ne laser as the light source and uses the Michelson interference arrangement to produce interference fringes; the fringes change periodically as the aerial camera focal plane moves, and the system records these periodic changes to obtain the focal plane displacement. A linear CCD and its driving system serve as the fringe pick-up tool, and a frequency-conversion and differentiating system determines the moving direction of the focal plane. After data collection, filtering, amplification, threshold comparison and counting, the CCD video signals of the interference fringes are sent to the computer, processed automatically, and the focal plane micro-displacement results are output. As a result, the focal plane micro-displacement can be measured automatically by this system. Using a linear CCD as the fringe pick-up tool greatly improves the counting accuracy and almost eliminates manual counting error, improving the measurement accuracy of the system. The results of the experiments demonstrate that the aerial camera focal plane displacement measurement accuracy is 0.2 nm, while laboratory and flight tests show that the aerial camera focal plane positioning is accurate and can satisfy the imaging requirements of the aerial camera.
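
    The conversion from a signed fringe count to a displacement follows from the Michelson geometry, where each full fringe corresponds to half a wavelength of motion of the measurement mirror; the sketch below assumes a He-Ne source and a fringe count supplied by the counting electronics.

    ```python
    # Hedged sketch: converting a signed Michelson fringe count into a displacement.
    # Each full fringe corresponds to lambda/2 of motion of the measurement mirror attached
    # to the focal plane; the counting and direction logic live in the hardware described above.
    HE_NE_WAVELENGTH_NM = 632.8   # He-Ne laser wavelength

    def displacement_nm(signed_fringe_count, fractional_fringe=0.0):
        """Displacement in nm from an integer fringe count plus an optional fractional fringe."""
        return (signed_fringe_count + fractional_fringe) * HE_NE_WAVELENGTH_NM / 2.0

    # Example: 12 full fringes plus a quarter fringe in the positive direction
    print(displacement_nm(12, 0.25))   # approximately 3876 nm
    ```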

  18. Extrinsic calibration of a non-overlapping camera network based on close-range photogrammetry.

    PubMed

    Dong, Shuai; Shao, Xinxing; Kang, Xin; Yang, Fujun; He, Xiaoyuan

    2016-08-10

    In this paper, an extrinsic calibration method for a non-overlapping camera network is presented based on close-range photogrammetry. The method does not require calibration targets or the cameras to be moved. The visual sensors are relatively motionless and do not see the same area at the same time. The proposed method combines the multiple cameras using some arbitrarily distributed encoded targets. The calibration procedure consists of three steps: reconstructing the three-dimensional (3D) coordinates of the encoded targets using a hand-held digital camera, performing the intrinsic calibration of the camera network, and calibrating the extrinsic parameters of each camera with only one image. A series of experiments, including 3D reconstruction, rotation, and translation, are employed to validate the proposed approach. The results show that the relative error for the 3D reconstruction is smaller than 0.003%, the relative errors of both rotation and translation are less than 0.066%, and the re-projection error is only 0.09 pixels.

  19. Iterative reconstruction using a Monte Carlo based system transfer matrix for dedicated breast positron emission tomography

    SciTech Connect

    Saha, Krishnendu; Straus, Kenneth J.; Glick, Stephen J.; Chen, Yu.

    2014-08-28

    To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated to imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at a 45% reduced noise level and a 1.5- to 3-fold improvement in resolution performance when compared to MLEM reconstruction using a simple line-integral model. The GATE-based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at the FOV periphery compared to line-integral-based system matrix reconstruction.

  20. 18F-Labeled Silicon-Based Fluoride Acceptors: Potential Opportunities for Novel Positron Emitting Radiopharmaceuticals

    PubMed Central

    Bernard-Gauthier, Vadim; Wängler, Carmen; Wängler, Bjoern; Schirrmacher, Ralf

    2014-01-01

    Background. Over the recent years, radiopharmaceutical chemistry has experienced a wide variety of innovative pushes towards finding both novel and unconventional radiochemical methods to introduce fluorine-18 into radiotracers for positron emission tomography (PET). These “nonclassical” labeling methodologies based on silicon-, boron-, and aluminium-18F chemistry deviate from commonplace bonding of an [18F]fluorine atom (18F) to either an aliphatic or aromatic carbon atom. One method in particular, the silicon-fluoride-acceptor isotopic exchange (SiFA-IE) approach, invalidates a dogma in radiochemistry that has been widely accepted for many years: the inability to obtain radiopharmaceuticals of high specific activity (SA) via simple IE. Methodology. The most advantageous feature of IE labeling in general is that labeling precursor and labeled radiotracer are chemically identical, eliminating the need to separate the radiotracer from its precursor. SiFA-IE chemistry proceeds in dipolar aprotic solvents at room temperature and below, entirely avoiding the formation of radioactive side products during the IE. Scope of Review. A great plethora of different SiFA species have been reported in the literature ranging from small prosthetic groups and other compounds of low molecular weight to labeled peptides and most recently affibody molecules. Conclusions. The literature over the last years (from 2006 to 2014) shows unambiguously that SiFA-IE and other silicon-based fluoride acceptor strategies relying on 18F− leaving group substitutions have the potential to become a valuable addition to radiochemistry. PMID:25157357

  1. Preclinical positron emission tomography scanner based on a monolithic annulus of scintillator: initial design study.

    PubMed

    Stolin, Alexander V; Martone, Peter F; Jaliparthi, Gangadhar; Raylman, Raymond R

    2017-01-01

    Positron emission tomography (PET) scanners designed for imaging of small animals have transformed translational research by reducing the necessity to invasively monitor physiology and disease progression. Virtually all of these scanners are based on the use of pixelated detector modules arranged in rings. This design, while generally successful, has some limitations. Specifically, use of discrete detector modules to construct PET scanners reduces detection sensitivity and can introduce artifacts in reconstructed images, requiring the use of correction methods. To address these challenges, and facilitate measurement of photon depth-of-interaction in the detector, we investigated a small animal PET scanner (called AnnPET) based on a monolithic annulus of scintillator. The scanner was created by placing 12 flat facets around the outer surface of the scintillator to accommodate placement of silicon photomultiplier arrays. Its performance characteristics were explored using Monte Carlo simulations and sections of the NEMA NU4-2008 protocol. Results from this study revealed that AnnPET's reconstructed spatial resolution is predicted to be [Formula: see text] full width at half maximum in the radial, tangential, and axial directions. Peak detection sensitivity is predicted to be 10.1%. Images of simulated phantoms (mini-hot rod and mouse whole body) yielded promising results, indicating the potential of this system for enhancing PET imaging of small animals.

  2. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage-, control-, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost efficient, but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations for a full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  3. A novel image reconstruction methodology based on inverse Monte Carlo analysis for positron emission tomography

    NASA Astrophysics Data System (ADS)

    Kudrolli, Haris A.

    2001-04-01

    A three-dimensional (3D) reconstruction procedure for Positron Emission Tomography (PET) based on inverse Monte Carlo analysis is presented. PET is a medical imaging modality which employs a positron emitting radio-tracer to give functional images of an organ's metabolic activity. This makes PET an invaluable tool in the detection of cancer and for in-vivo biochemical measurements. There are a number of analytical and iterative algorithms for image reconstruction of PET data. Analytical algorithms are computationally fast, but the assumptions intrinsic in the line integral model limit their accuracy. Iterative algorithms can apply accurate models for reconstruction and give improvements in image quality, but at an increased computational cost. These algorithms require the explicit calculation of the system response matrix, which may not be easy to calculate. This matrix gives the probability that a photon emitted from a certain source element will be detected in a particular detector line of response. The "Three Dimensional Stochastic Sampling" (SS3D) procedure implements iterative algorithms in a manner that does not require the explicit calculation of the system response matrix. It uses Monte Carlo techniques to simulate the process of photon emission from a source distribution and interaction with the detector. This technique has the advantage of being able to model complex detector systems and also take into account the physics of gamma ray interaction within the source and detector systems, which leads to an accurate image estimate. A series of simulation studies was conducted to validate the method using the Maximum Likelihood - Expectation Maximization (ML-EM) algorithm. The accuracy of the reconstructed images was improved by using an algorithm that required a priori knowledge of the source distribution. Means to reduce the computational time for reconstruction were explored by using parallel processors and algorithms that had faster convergence rates
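
    For reference, the ML-EM update used as the validation algorithm above can be written compactly with an explicit system matrix A (rows = detector lines of response, columns = source voxels); the SS3D procedure avoids forming A explicitly, so this dense sketch only illustrates the update rule itself.

    ```python
    # Hedged sketch: generic ML-EM update for emission tomography with an explicit system matrix.
    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        """ML-EM: x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k))."""
        x = np.ones(A.shape[1])
        sensitivity = A.T @ np.ones(A.shape[0])     # A^T 1, per-voxel detection sensitivity
        for _ in range(n_iter):
            expected = A @ x                        # forward projection of the current estimate
            ratio = y / np.maximum(expected, eps)   # measured / expected counts per LOR
            x *= (A.T @ ratio) / np.maximum(sensitivity, eps)
        return x
    ```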

  4. FPGA-Based Front-End Electronics for Positron Emission Tomography.

    PubMed

    Haselman, Michael; Dewitt, Don; McDougald, Wendy; Lewellen, Thomas K; Miyaoka, Robert; Hauck, Scott

    2009-02-22

    Modern Field Programmable Gate Arrays (FPGAs) are capable of performing complex discrete signal processing algorithms with clock rates above 100 MHz. This, combined with FPGAs' low expense, ease of use, and selected dedicated hardware, makes them an ideal technology for a data acquisition system for positron emission tomography (PET) scanners. Our laboratory is producing a high-resolution, small-animal PET scanner that utilizes FPGAs as the core of the front-end electronics. For this next generation scanner, functions that are typically performed in dedicated circuits, or offline, are being migrated to the FPGA. This will not only simplify the electronics, but the features of modern FPGAs can be utilized to add significant signal processing power to produce higher resolution images. In this paper two such processes, sub-clock rate pulse timing and event localization, will be discussed in detail. We show that timing performed in the FPGA can achieve a resolution that is suitable for small-animal scanners, and will outperform the analog version given a low enough sampling period for the ADC. We will also show that the position of events in the scanner can be determined in real time using a statistics-based positioning algorithm.

  5. Low background high efficiency radiocesium detection system based on positron emission tomography technology

    SciTech Connect

    Yamamoto, Seiichi; Ogata, Yoshimune

    2013-09-15

    After the 2011 nuclear power plant accident at Fukushima, radiocesium contamination in food became a serious concern in Japan. However, low background and high efficiency radiocesium detectors, including semiconductor germanium detectors, are expensive and bulky. To solve this problem, we developed a radiocesium detector by employing positron emission tomography (PET) technology. Because 134Cs emits two gamma photons (795 and 605 keV) within 5 ps, they can be measured selectively in coincidence. Major environmental gamma emitters such as 40K (1.46 MeV) are single photon emitters, so a coincidence measurement reduces the detection limit of radiocesium detectors. We arranged eight sets of Bi4Ge3O12 (BGO) scintillation detectors in double rings (four in each ring) and measured the coincidences between these detectors using a PET data acquisition system. A 50 × 50 × 30 mm BGO was optically coupled to a 2 in. square photomultiplier tube (PMT). By measuring the coincidences, we eliminated most single gamma photons from the energy distribution and detected only those from 134Cs, at an average efficiency of 12%. The minimum detectable concentration of the system for a 100 s acquisition time is less than half of the food monitor requirement in Japan (25 Bq/kg). These results show that the developed radiocesium detector based on PET technology is promising for detecting low-level radiocesium.

  6. Optimization of light field display-camera configuration based on display properties in spectral domain.

    PubMed

    Bregović, Robert; Kovács, Péter Tamás; Gotchev, Atanas

    2016-02-08

    The visualization capability of a light field display is uniquely determined by its angular and spatial resolution, referred to as the display passband. In this paper we use a multidimensional sampling model to describe the display-camera channel. Based on the model, for a given display passband, we propose a methodology for determining the optimal distribution of ray generators in a projection-based light field display. We also discuss the required camera setup that can provide data with the necessary amount of detail for such a display, maximizing the visual quality and minimizing the amount of data.

  7. Positron Physics

    NASA Technical Reports Server (NTRS)

    Drachman, Richard J.

    2003-01-01

    I will give a review of the history of low-energy positron physics, experimental and theoretical, concentrating on the type of work pioneered by John Humberston and the positronics group at University College. This subject became a legitimate subfield of atomic physics under the enthusiastic direction of the late Sir Harrie Massey, and it attracted a diverse following throughout the world. At first purely theoretical, the subject has now expanded to include high brightness beams of low-energy positrons, positronium beams, and, lately, experiments involving anti-hydrogen atoms. The theory requires a certain type of persistence in its practitioners, as well as an eagerness to try new mathematical and numerical techniques. I will conclude with a short summary of some of the most interesting recent advances.

  8. Secure chaotic map based block cryptosystem with application to camera sensor networks.

    PubMed

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs ciphertext feedback to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for a camera sensor network is constructed. The network comprises a set of inexpensive camera sensors to capture the images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. The transmission security between the sink node and the server is achieved by utilizing the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm can overcome the flaws and maintain all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks.
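
    For readers unfamiliar with logistic-map ciphers, the sketch below shows only the underlying chaotic iteration used to derive a keystream; it is not the authors' cryptosystem (which adds ciphertext feedback and other hardening) and is not secure on its own.

    ```python
    # Hedged illustration: logistic-map keystream generation, x_{k+1} = r * x_k * (1 - x_k).
    # This is NOT the proposed cryptosystem and must not be used as a real cipher.
    def logistic_keystream(x0, r=3.99, n_bytes=16, warmup=100):
        """Generate n_bytes of keystream from the logistic map, discarding a transient."""
        x = x0
        for _ in range(warmup):                 # discard transient iterations
            x = r * x * (1 - x)
        stream = []
        for _ in range(n_bytes):
            x = r * x * (1 - x)
            stream.append(int(x * 256) % 256)   # quantize the chaotic state to a byte
        return bytes(stream)

    plaintext = b"example block 01"
    key = logistic_keystream(x0=0.3456789, n_bytes=len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    print(ciphertext.hex())
    ```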

  9. Submap joining smoothing and mapping for camera-based indoor localization and mapping

    NASA Astrophysics Data System (ADS)

    Bjärkefur, J.; Karlsson, A.; Grönwall, C.; Rydell, J.

    2011-06-01

    Personnel positioning is important for safety in e.g. emergency response operations. In GPS-denied environments, possible positioning solutions include systems based on radio frequency communication, inertial sensors, and cameras. Many camera-based systems create a map and localize themselves relative to that. The computational complexity of most such solutions grows rapidly with the size of the map. One way to reduce the complexity is to divide the visited region into submaps. This paper presents a novel method for merging conditionally independent submaps (generated using e.g. EKF-SLAM) by the use of smoothing. Using this approach it is possible to build large maps in close to linear time. The method is demonstrated in two indoor scenarios, where data was collected with a trolley-mounted stereo vision camera.

  10. Development of an angled Si-PM-based detector unit for positron emission mammography (PEM) system

    NASA Astrophysics Data System (ADS)

    Nakanishi, Kouhei; Yamamoto, Seiichi

    2016-11-01

    Positron emission mammography (PEM) systems have higher sensitivity than clinical whole body PET systems because they have a smaller ring diameter. However, the spatial resolution of PEM systems is not high enough to detect early stage breast cancer. To solve this problem, we developed a silicon photomultiplier (Si-PM) based detector unit for the development of a PEM system. Since the channels of a Si-PM are small, a Si-PM can resolve small scintillator pixels to improve the spatial resolution. Si-PM based detectors also have inherently high timing resolution and can reduce random coincidence events by narrowing the time window. We used 1.5×1.9×15 mm LGSO scintillation pixels and arranged them in an 8×24 matrix to form scintillator blocks. Four scintillator blocks were optically coupled to Si-PM arrays with an angled light guide to form a detector unit. Since the light guide has angles of 5.625°, we can arrange 64 scintillator blocks in a nearly circular shape (a regular 64-sided polygon) using 16 detector units. We clearly resolved the pixels of the scintillator blocks in a 2-dimensional position histogram, where the averages of the peak-to-valley ratios (P/Vs) were 3.7±0.3 and 5.7±0.8 in the transverse and axial directions, respectively. The average energy resolution was 14.2±2.1% full-width at half-maximum (FWHM). By including the temperature-dependent gain control electronics, the photo-peak channel shifts were kept within ±1.5% over temperatures from 23 °C to 28 °C. Given these results, in addition to the potential high timing performance of Si-PM based detectors, our detector unit is promising for the development of a high-resolution PEM system.

  11. Defocus compensation system of long focal aerial camera based on auto-collimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yu-ye; Zhao, Yu-liang; Xu, Zhao-lin

    2010-10-01

    Nowadays, novel aerial reconnaissance cameras emphasize shooting performance at high altitude or at long oblique photographic distances. In order to obtain larger-scale pictures that are easier to interpret, the camera needs a long focal length. But a long-focal-length camera is more easily influenced by environmental conditions, leading to large changes of the lens back focus, which can greatly decrease the lens resolution. Therefore, precise defocus compensation is required for a long-focal-length aerial camera system. In order to realize defocus compensation, a defocus compensation system based on autocollimation is designed. Firstly, the causes of defocus in a long-focal-length camera are discussed, factors such as changes of atmospheric pressure, temperature and oblique photographic distance are pointed out, and a mathematical equation for computing the camera's defocus amount is presented. Secondly, after the camera's defocus is analyzed, electro-optical autocollimation with a high degree of automation and intelligence is adopted in the system. Before shooting, the focal surface is located by the electro-optical autocollimation focus detection mechanism, and the aircraft altitude data are imported through the electronic control system. The defocus is corrected by computing the defocus amount and sending the corresponding signal to the focusing control motor. An efficient improved hill-climbing search algorithm is adopted for locating the focal surface during the correction process. When confirming the direction of the focus curve, the improved algorithm considers both of the two previous focusing results and four sample points: if four consecutive points rise, the curve is confirmed as rising; if four consecutive points fall, the curve is confirmed as falling. In this way, local peaks appearing between two focusing steps can be avoided. The defocusing compensation system consists of optical component and precise
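
    A simplified reading of the improved hill-climb search is sketched below: the focal surface is stepped in one direction until four consecutive sharpness samples fall, then the direction is reversed with a smaller step. The sharpness metric, step sizes, and termination criterion are illustrative assumptions, not the paper's exact algorithm.

    ```python
    # Hedged sketch: hill-climb focus search with a four-consecutive-points direction check.
    # sharpness(position) stands in for the camera's focus quality metric.
    def hill_climb_focus(sharpness, start, step, min_step=1e-3, max_steps=200):
        """Step the focal surface until four consecutive samples fall, then reverse with a smaller step."""
        position, direction = start, +1
        values = [sharpness(position)]
        for _ in range(max_steps):
            position += direction * step
            values.append(sharpness(position))
            last4 = values[-4:]
            if len(last4) == 4 and all(b < a for a, b in zip(last4, last4[1:])):
                direction *= -1        # four consecutive falling samples: we passed the peak
                step /= 2.0            # refine the search on the way back
                if step < min_step:    # fine enough: stop near the best focus
                    break
        return position
    ```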

  12. Calculation of positron observables using a finite-element-based approach

    SciTech Connect

    Klein, B. M.; Pask, J. E.; Sterne, P.

    1998-11-04

    We report the development of a new method for calculating positron observables using a finite-element approach for the solution of the Schrodinger equation. This method combines the advantages of both basis-set and real-space-grid approaches. The strict locality in real space of the finite element basis functions results in a method that is well suited for calculating large systems of a thousand or more atoms, as required for calculations of extended defects such as dislocations. In addition, the method is variational in nature and its convergence can be controlled systematically. The calculation of positron observables is straightforward due to the real-space nature of this method. We illustrate the power of this method with positron lifetime calculations on defects and defect-free materials, using overlapping atomic charge densities.

  13. A cryogenically cooled, ultra-high-energy-resolution, trap-based positron beam

    SciTech Connect

    Natisin, M. R.; Danielson, J. R.; Surko, C. M.

    2016-01-11

    A technique is described to produce a pulsed, magnetically guided positron beam with significantly improved beam characteristics over those available previously. A pulsed, room-temperature positron beam from a buffer gas trap is used as input to a trap that captures the positrons, compresses them both radially and axially, and cools them to 50 K on a cryogenic CO buffer gas before ejecting them as a pulsed beam. The total energy spread of the beam formed using this technique is 6.9 ± 0.7 meV FWHM, which is a factor of ∼5 better than the previous state-of-the-art, while simultaneously having sub-microsecond temporal resolution and millimeter spatial resolution. Possible further improvements in beam quality are discussed.

  14. Medium field of view multiflat panel-based portable gamma camera

    NASA Astrophysics Data System (ADS)

    Giménez, M.; Benlloch, J. M.; Cerdá, J.; Escat, B.; Fernández, M.; Giménez, E. N.; Lerche, Ch. W.; Martínez, J. D.; Mora, F. J.; Pavón, N.; Sánchez, F.; Sebastià, A.

    2004-06-01

    A portable gamma camera based on the multianode technology has been built and tested. The camera consists in optically coupling four "Flat Panel" H8500 PSPMTs to a 100×100×4 mm 3 CsI(Na) continuous scintillation crystal. The dimensions of the camera are 17×12×12 cm 3 including the pinhole collimator and it weighs a total of 2 kg. Its average spatial resolution is 2 mm, its energy resolution is about 15%, and it shows a field of view of 95 mm. Because of its portability, its FOV and its cost, it is a convenient choice for osteological, renal, mammary, and endocrine (thyroid, parathyroid and suprarenal) scintigraphies, as well as other important applications such as intraoperatory detection of lymph nodes and surgical oncology. We describe the simulations performed which explain the crystal choice, the mechanical design of the camera and the method of calibration and algorithms used for position, energy and uniformity correction. We present images taken from phantoms. We plan to increase the camera sensitivity by using a four-holes collimator in combination with the MLEM algorithm, in order to decrease the exploration time and to reduce the dose given to the patient.

  15. Clinical CT-based calculations of dose and positron emitter distributions in proton therapy using the FLUKA Monte Carlo code

    NASA Astrophysics Data System (ADS)

    Parodi, K.; Ferrari, A.; Sommerer, F.; Paganetti, H.

    2007-07-01

    Clinical investigations on post-irradiation PET/CT (positron emission tomography/computed tomography) imaging for in vivo verification of treatment delivery and, in particular, beam range in proton therapy are underway at Massachusetts General Hospital (MGH). Within this project, we have developed a Monte Carlo framework for CT-based calculation of dose and irradiation-induced positron emitter distributions. Initial proton beam information is provided by a separate Geant4 Monte Carlo simulation modelling the treatment head. Particle transport in the patient is performed in the CT voxel geometry using the FLUKA Monte Carlo code. The implementation uses a discrete number of different tissue types with composition and mean density deduced from the CT scan. Scaling factors are introduced to account for the continuous Hounsfield unit dependence of the mass density and of the relative stopping power ratio to water used by the treatment planning system (XiO (Computerized Medical Systems Inc.)). Resulting Monte Carlo dose distributions are generally found in good correspondence with calculations of the treatment planning program, except in a few cases (e.g. in the presence of air/tissue interfaces). Whereas dose is computed using standard FLUKA utilities, positron emitter distributions are calculated by internally combining proton fluence with experimental and evaluated cross-sections yielding 11C, 15O, 14O, 13N, 38K and 30P. Simulated positron emitter distributions yield PET images in good agreement with measurements. In this paper, we describe in detail the specific implementation of the FLUKA calculation framework, which may be easily adapted to handle arbitrary phase spaces of proton beams delivered by other facilities or include more reaction channels based on additional cross-section data. Further, we demonstrate the effects of different acquisition time regimes (e.g., PET imaging during or after irradiation) on the intensity and spatial distribution of the irradiation
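
    A minimal sketch of the kind of Hounsfield-unit handling described above: a continuous HU-to-density ramp plus discrete tissue segments, with a per-voxel density scaling factor restoring the continuous dependence inside each segment. All breakpoints and nominal densities below are hypothetical placeholders, not the clinical calibration used in the paper.

```python
import numpy as np

# Illustrative HU-to-mass-density ramp (hypothetical breakpoints).
HU_PTS  = np.array([-1000.0, -100.0, 0.0, 100.0, 1000.0, 3000.0])
RHO_PTS = np.array([0.001, 0.93, 1.0, 1.09, 1.60, 2.80])        # g/cm^3

# Discrete tissue segments (HU range, nominal density) -- also hypothetical.
SEGMENTS = [(-1000.0, -950.0, 0.001), (-950.0, -120.0, 0.50),
            (-120.0, 120.0, 1.00), (120.0, 3000.1, 1.50)]


def hu_to_density(hu):
    """Continuous mass density from the Hounsfield unit."""
    return np.interp(hu, HU_PTS, RHO_PTS)


def segment_and_scaling(hu):
    """Tissue segment assigned to a voxel and the density scaling factor
    that restores the continuous HU dependence within that segment."""
    rho = hu_to_density(hu)
    for lo, hi, rho_nominal in SEGMENTS:
        if lo <= hu < hi:
            return (lo, hi), rho / rho_nominal
    raise ValueError("HU value outside the calibrated range")


if __name__ == "__main__":
    for hu in (-800.0, -50.0, 40.0, 900.0):
        seg, scale = segment_and_scaling(hu)
        print(f"HU {hu:6.0f}: rho {hu_to_density(hu):.3f} g/cm^3, "
              f"segment {seg}, scaling {scale:.3f}")
```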

  16. Crystal identification in positron emission tomography using nonrigid registration to a Fourier-based template

    PubMed Central

    Chaudhari, Abhijit J.; Joshi, Anand A.; Bowen, Spencer L.; Leahy, Richard M.; Cherry, Simon R.; Badawi, Ramsey D.

    2009-01-01

    Modern Positron Emission Tomography (PET) detectors typically are made from 2D modular arrays of scintillation crystals. Their characteristic flood field response (or flood histogram) must be segmented in order to correctly determine the crystal of annihilation photon interaction in the system. Crystal identification information thus generated is also needed for accurate system modeling as well as for detailed detector characterization and performance studies. In this paper, we present a semi-automatic general purpose template-guided scheme for segmentation of flood histograms. We first generate a template image that exploits the spatial frequency information in the given flood histogram using Fourier-space analysis. This template image is a lower order approximation of the flood histogram, and can be segmented with horizontal and vertical lines drawn midway between adjacent peaks in the histogram. The template is then registered to the given flood histogram by a diffeomorphic polynomial-based warping scheme that is capable of iteratively minimizing intensity differences. The displacement field thus calculated is applied to the segmentation of the template resulting in a segmentation of the given flood histogram. We evaluate our segmentation scheme for a photomultiplier tube-based PET detector, a detector with readout by a position-sensitive avalanche photodiode (PSAPD) and a detector consisting of a stack of photomultiplier tubes and scintillator arrays. Further, we quantitatively compare the performance of the proposed method to that of a manual segmentation scheme using reconstructed images of a line source phantom. We also present an adaptive method for distortion reduction in flood histograms obtained for PET detectors that use PSAPDs. PMID:18723924
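
    The Fourier-space template idea can be sketched as follows: estimate the crystal pitch from the dominant spatial frequency of the flood histogram's projections, then build a lower-order lattice-of-peaks approximation. The subsequent diffeomorphic polynomial warping is not reproduced here; array shapes and the Gaussian peak width are assumptions.

```python
import numpy as np


def crystal_pitch(flood, axis=0):
    """Estimate the crystal pitch (in pixels) along one direction from the
    dominant spatial frequency of the flood histogram's 1D projection."""
    profile = flood.sum(axis=axis).astype(float)
    profile -= profile.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    spectrum[0] = 0.0                       # ignore the DC component
    k = int(spectrum.argmax())              # dominant frequency index
    return profile.size / k                 # period in pixels


def template_grid(shape, pitch_x, pitch_y, sigma=1.5):
    """Lower-order approximation of the flood: a regular lattice of
    Gaussian peaks with the estimated pitches."""
    y, x = np.mgrid[0:shape[0], 0:shape[1]].astype(float)
    dx = (x % pitch_x) - pitch_x / 2.0
    dy = (y % pitch_y) - pitch_y / 2.0
    return np.exp(-(dx ** 2 + dy ** 2) / (2.0 * sigma ** 2))


# usage sketch (flood: 2D numpy array holding the detector flood histogram):
# px = crystal_pitch(flood, axis=0)   # pitch along x (columns)
# py = crystal_pitch(flood, axis=1)   # pitch along y (rows)
# template = template_grid(flood.shape, px, py)
```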

  17. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High speed recording devices are configured to record image or other data defining a high speed event. To calibrate and establish time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher frequency optical signal which is output through a fiber coupled link to the streak camera.

  18. Omnidirectional stereo vision sensor based on single camera and catoptric system.

    PubMed

    Zhou, Fuqiang; Chai, Xinghua; Chen, Xin; Song, Ya

    2016-09-01

    An omnidirectional stereo vision sensor based on a single camera and a catoptric system is proposed. As crucial components, one camera and two pyramid mirrors are used for imaging. Omnidirectional measurement towards different directions in the horizontal field can be performed by four pairs of virtual cameras, with perfect synchronism and improved compactness. Moreover, perspective projection invariance is ensured in the imaging process, which avoids the imaging distortion introduced by curved mirrors. In this paper, the structure model of the sensor was established and a sensor prototype was designed. The influences of the structural parameters on the field of view and the measurement accuracy were also discussed. In addition, real experiments and analyses were performed to evaluate the performance of the proposed sensor in measurement applications. The results proved the feasibility of the sensor and exhibited considerable accuracy in 3D coordinate reconstruction.

  19. Portable Positron Measurement System (PPMS)

    SciTech Connect

    2011-01-01

    Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  20. Portable Positron Measurement System (PPMS)

    ScienceCinema

    None

    2016-07-12

    Portable Positron Measurement System (PPMS) is an automated, non-destructive inspection system based on positron annihilation, which characterizes a material's in situ atomic-level properties during the manufacturing processes of formation, solidification, and heat treatment. Simultaneous manufacturing and quality monitoring are now possible. Learn more about the lab's project on our facebook site http://www.facebook.com/idahonationallaboratory.

  1. Cost Effective Paper-Based Colorimetric Microfluidic Devices and Mobile Phone Camera Readers for the Classroom

    ERIC Educational Resources Information Center

    Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.

    2015-01-01

    We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…

  2. Monte Carlo-based evaluation of S-values in mouse models for positron-emitting radionuclides

    NASA Astrophysics Data System (ADS)

    Xie, Tianwu; Zaidi, Habib

    2013-01-01

    In addition to being a powerful clinical tool, Positron emission tomography (PET) is also used in small laboratory animal research to visualize and track certain molecular processes associated with diseases such as cancer, heart disease and neurological disorders in living small animal models of disease. However, dosimetric characteristics in small animal PET imaging are usually overlooked, though the radiation dose may not be negligible. In this work, we constructed 17 mouse models of different body mass and size based on the realistic four-dimensional MOBY mouse model. Particle (photons, electrons and positrons) transport using the Monte Carlo method was performed to calculate the absorbed fractions and S-values for eight positron-emitting radionuclides (C-11, N-13, O-15, F-18, Cu-64, Ga-68, Y-86 and I-124). Among these radionuclides, O-15 emits positrons with high energy and frequency and produces the highest self-absorbed S-values in each organ, while Y-86 emits γ-rays with high energy and frequency which results in the highest cross-absorbed S-values for non-neighbouring organs. Differences between S-values for self-irradiated organs were between 2% and 3%/g difference in body weight for most organs. For organs irradiating other organs outside the splanchnocoele (i.e. brain, testis and bladder), differences between S-values were lower than 1%/g. These appealing results can be used to assess variations in small animal dosimetry as a function of total-body mass. The generated database of S-values for various radionuclides can be used in the assessment of radiation dose to mice from different radiotracers in small animal PET experiments, thus offering quantitative figures for comparative dosimetry research in small animal models.
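
    For orientation, the MIRD-style S-value underlying such tables is the absorbed dose in a target organ per decay in a source organ. A minimal sketch, in which the emission spectrum and the absorbed fractions (which in the paper come from the Monte Carlo particle transport) are placeholder inputs:

```python
MEV_TO_J = 1.602176634e-13


def s_value(emissions, absorbed_fraction, target_mass_kg):
    """MIRD-style S-value (Gy per decay) for one source->target pair.

    emissions         : list of (mean energy per decay [MeV], yield per decay)
    absorbed_fraction : callable E -> fraction of that emission's energy
                        absorbed in the target (from the particle transport)
    target_mass_kg    : mass of the target organ
    """
    energy_absorbed_J = sum(E * y * absorbed_fraction(E) * MEV_TO_J
                            for E, y in emissions)
    return energy_absorbed_J / target_mass_kg


# usage sketch with placeholder numbers (not evaluated nuclear data):
# emissions = [(0.25, 1.0),        # mean positron energy, yield per decay
#              (0.511, 2.0)]       # annihilation photons
# print(s_value(emissions, lambda E: 0.3, target_mass_kg=0.001))
```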

  3. Positron and Ion Migrations and the Attractive Interactions between like Ion Pairs in the Liquids: Based on Studies with Slow Positron Beam

    NASA Astrophysics Data System (ADS)

    Kanazawa, I.; Sasaki, T.; Yamada, K.; Imai, E.

    2014-04-01

    We have discussed positron and ion diffusion in liquids by using the gauge-invariant effective Lagrange density with the spontaneously broken density (the hedgehog-like density) and the internal non-linear gauge fields (Yang-Mills gauge fields), and have presented the relation to the Hubbard-Onsager theory.

  4. A method of color correction of camera based on HSV model

    NASA Astrophysics Data System (ADS)

    Zhao, Rujin; Wang, Jin; Yu, Guobing; Zhong, Jie; Zhou, Wulin; Li, Yihao

    2014-09-01

    A novel method for camera color correction based on the HSV (Hue, Saturation, Value) model is proposed in this paper, addressing the problem that the spectral response of a camera differs from the CIE standard and that the colors in camera images are therefore distorted. First, the image is transformed from the RGB model to the HSV model and its color is corrected there; because the HSV model is consistent with human vision, the corrected color accords with human perception. Second, a color checker with 24 patches imaged under a standard light source is used to compute the correction coefficient matrix, which brings the spectral response of the camera into better agreement with the CIE standard. Using 24 patches also improves the applicability of the color correction coefficient matrix to different images. The experimental results show that the color difference between the corrected colors and the color checker is lower with the proposed method, and that the corrected image colors are consistent with human perception.
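
    The checker-based step can be sketched as a least-squares color correction matrix fitted to the 24 measured/reference patch pairs; the HSV-domain adjustment described in the abstract is not reproduced here, and the linear-RGB assumption is the sketch's, not necessarily the authors'.

```python
import numpy as np


def correction_matrix(measured, reference):
    """Least-squares 3x3 matrix M such that measured @ M ~ reference.
    measured, reference: (24, 3) arrays of linear RGB patch values."""
    M, *_ = np.linalg.lstsq(measured, reference, rcond=None)
    return M


def apply_correction(image_rgb, M):
    """Apply the correction matrix to a float RGB image in [0, 1]."""
    h, w, _ = image_rgb.shape
    corrected = image_rgb.reshape(-1, 3) @ M
    return corrected.reshape(h, w, 3).clip(0.0, 1.0)
```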

  5. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research on salient object regions in natural scenes has attracted much attention in computer vision and is widely used in applications such as object detection and segmentation. However, accurately focusing on the salient region while photographing real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system that makes use of both a background prior and a compactness prior. In the proposed method, unsuitable boundary regions are eliminated with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped into connected areas with compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.
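
    Ranking with background queries is commonly done with the closed-form manifold ranking solution f = (I - alpha*S)^-1 y over a superpixel affinity graph. A minimal sketch of that ranking step (the affinity construction, boundary filtering and compactness weighting of the actual method are not reproduced; alpha and the query encoding are assumptions):

```python
import numpy as np


def manifold_ranking(W, y, alpha=0.99):
    """Closed-form manifold ranking f = (I - alpha*S)^-1 y, where S is the
    symmetrically normalised affinity matrix of the superpixel graph and y
    marks the query nodes (here, the retained boundary superpixels used as
    background seeds)."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    # high similarity to background queries means NOT salient, so invert
    f = (f - f.min()) / (f.max() - f.min() + 1e-12)
    return 1.0 - f
```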

  6. A Kinect™ camera based navigation system for percutaneous abdominal puncture

    NASA Astrophysics Data System (ADS)

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-01

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. The second generation of the Kinect™ was released recently; we therefore developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture and compared its performance for needle insertion guidance with that of the first-generation Kinect™. For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect™ depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect™ for Windows version 2 (Kinect™ V2). The target registration error (TRE), user error, and TPE are 4.26 ± 1.94 mm, 2.92 ± 1.67 mm, and 5.23 ± 2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect™ for Windows version 1 (Kinect™ V1) was tested with 12 insertions, and the TRE evaluated with the Kinect™ V1 is statistically significantly larger than that with the Kinect™ V2. For the animal experiment, punctures of fifteen artificial liver tumors were guided by the navigation system. The TPE was evaluated as 6.40 ± 2.72 mm, and its lateral and longitudinal components were 4.30 ± 2.51 mm and 3.80 ± 3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is acceptable
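
    The registration step named in the abstract is standard point-to-point ICP seeded by a coarse initial transform. A minimal sketch, assuming simple nearest-neighbour correspondences and an SVD (Kabsch) update; the paper's 2D shape-based initialisation is represented here only by the (R0, t0) arguments.

```python
import numpy as np
from scipy.spatial import cKDTree


def best_rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cq - R @ cp


def icp(source, target, R0=np.eye(3), t0=np.zeros(3), n_iter=50, tol=1e-6):
    """Point-to-point ICP; (R0, t0) is the coarse initial alignment
    (in the paper, obtained from the 2D shape-based correspondence search)."""
    tree = cKDTree(target)
    R, t = R0, t0
    prev_err = np.inf
    for _ in range(n_iter):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)
        R_step, t_step = best_rigid_transform(moved, target[idx])
        R, t = R_step @ R, R_step @ t + t_step
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```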

  7. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    NASA Astrophysics Data System (ADS)

    Kishimoto, A.; Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T.; Ohsuka, S.

    2017-02-01

    The Compton camera, which images the gamma-ray distribution by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide energy range. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd3Al2Ga3O12 (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that at 662 keV the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D imaging reconstruction algorithm using the multi-angle data acquisition method. The result confirmed that for a 137Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of sources of different energies measured simultaneously (22Na [511 keV], 137Cs [662 keV], and 54Mn [834 keV]).
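
    The Compton kinematics that such a camera relies on reduce, for each event, to a cone opening angle computed from the energies deposited in the scatterer and absorber. A minimal sketch (energies in keV; detector response and Doppler broadening ignored):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV


def compton_cone_angle(e_scatter_keV, e_absorb_keV):
    """Opening angle (degrees) of the Compton cone from the energy deposited
    in the scatterer (recoil electron) and in the absorber (scattered photon)."""
    e0 = e_scatter_keV + e_absorb_keV          # incident photon energy
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb_keV - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.degrees(math.acos(cos_theta))


# e.g. a 662 keV photon depositing 200 keV in the scatterer:
# print(compton_cone_angle(200.0, 462.0))   # about 48 degrees
```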

  8. A novel multi slit X-ray backscatter camera based on synthetic aperture focusing

    NASA Astrophysics Data System (ADS)

    Wieder, Frank; Ewert, Uwe; Vogel, Justus; Jaenisch, Gerd-Rüdiger; Bellon, Carsten

    2017-02-01

    A special slit collimator was developed earlier for fast acquisition of X-ray backscatter images. The design was based on twisted slits (ruled surfaces) in a tungsten block. A comparison with alternative techniques such as the flying-spot and coded-aperture pinhole techniques could not confirm the expected higher contrast sensitivity. In analogy to the coded aperture technique, a novel multi-slit camera was designed and tested. Several twisted slits were arranged in parallel in a metal block. The CAD designs of different multi-slit cameras were evaluated and optimized with the computer simulation packages aRTist and McRay. The camera projects a set of identical images, one per slit, onto the digital detector array, where they overlay each other. Afterwards, the aperture is corrected by a deconvolution algorithm that focuses the overlaying projections into a single representation of the object. Furthermore, a correction of the geometrical distortions due to the slit geometry is performed. The expected increase of the contrast-to-noise ratio is proportional to the square root of the number of parallel slits in the camera. However, additional noise originating from the deconvolution operation has to be considered. The slit design, functional principle, and the expected limits of this technique are discussed.

  9. Positron program at the Idaho Accelerator Center

    SciTech Connect

    Stancari, Giulio

    2009-09-02

    Positron physics is an important part of the research activities at the Idaho Accelerator Center (IAC). With positron annihilation spectroscopy, maps of nanodefects in materials have been obtained. For this purpose, positrons are generated by radioactive decay, photoactivation, or pair production. Preliminary tests of positron sources in the MeV range based on electron linacs have also been carried out at the IAC, and an expansion of this program is planned. A similar positron beam at Jefferson Lab would greatly improve our knowledge of the inner structure of the proton. In this paper, research with positrons at the IAC is reviewed. After a description of the Center's facilities, results from positron annihilation spectroscopy are discussed, together with future plans for testing a prototype positron source for CEBAF.

  10. Data acquisition system based on the Nios II for a CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Hu, Keliang; Wang, Chunrong; Liu, Yangbing; He, Chun

    2006-06-01

    The FPGA with Avalon Bus architecture and Nios soft-core processor developed by Altera Corporation is an advanced embedded solution for control and interface systems. A CCD data acquisition system with an Ethernet terminal port based on the TCP/IP protocol is implemented in NAOC, which is composed of an interface board with an Altera FPGA, 32 MB SDRAM and some other accessory devices integrated on it, and two packages of control software running on the Nios II embedded processor and the remote host PC, respectively. The system is used to replace a 7200 series image acquisition card which is inserted in a control and data acquisition PC, and to download commands to an existing CCD camera and collect image data from the camera to the PC. The embedded chip in the system is a Cyclone FPGA with a configurable Nios II soft-core processor. The hardware structure of the system, the configuration of the embedded soft-core processor, and the peripherals of the processor in the FPGA are described. The C program running in the Nios II embedded system is built with the Nios II IDE kit and the C++ program used on the PC is developed in Microsoft's Visual C++ environment. Some key techniques in the design and implementation of the C and VC++ programs are presented, including the downloading of the camera commands, initialization of the camera, DMA control, TCP/IP communication and UDP data uploading.
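
    The host-PC side of such a TCP command / UDP upload scheme can be sketched as below; the IP address, port numbers and framing are hypothetical placeholders, not the NAOC system's actual protocol.

```python
import socket

CAMERA_IP, CMD_PORT, DATA_PORT = "192.168.1.50", 5000, 5001   # hypothetical


def send_command(cmd: bytes) -> bytes:
    """Download a camera command to the embedded board over TCP and
    return its acknowledgement (framing is an assumption)."""
    with socket.create_connection((CAMERA_IP, CMD_PORT), timeout=5) as s:
        s.sendall(cmd)
        return s.recv(1024)


def receive_frame(n_bytes: int, chunk: int = 1400) -> bytes:
    """Collect one image frame uploaded by the board as UDP datagrams."""
    buf = bytearray()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", DATA_PORT))
        s.settimeout(10)
        while len(buf) < n_bytes:
            data, _ = s.recvfrom(chunk)
            buf.extend(data)
    return bytes(buf)
```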

  11. Performance Analysis of a Low-Cost Triangulation-Based 3d Camera: Microsoft Kinect System

    NASA Astrophysics Data System (ADS)

    Chow, J. C. K.; Ang, K. D.; Lichti, D. D.; Teskey, W. F.

    2012-07-01

    Recent technological advancements have made active imaging sensors popular for 3D modelling and motion tracking. The 3D coordinates of signalised targets are traditionally estimated by matching conjugate points in overlapping images. Current 3D cameras can acquire point clouds at video frame rates from a single exposure station. In the area of 3D cameras, Microsoft and PrimeSense have collaborated and developed an active 3D camera based on the triangulation principle, known as the Kinect system. This off-the-shelf system costs less than 150 USD and has drawn a lot of attention from the robotics, computer vision, and photogrammetry disciplines. In this paper, the prospect of using the Kinect system for precise engineering applications was evaluated. The geometric quality of the Kinect system as a function of the scene (i.e. variation of depth, ambient light conditions, incidence angle, and object reflectivity) and the sensor (i.e. warm-up time and distance averaging) were analysed quantitatively. This system's potential in human body measurements was tested against a laser scanner and 3D range camera. A new calibration model for simultaneously determining the exterior orientation parameters, interior orientation parameters, boresight angles, leverarm, and object space features parameters was developed and the effectiveness of this calibration approach was explored.

  12. Pixel-based characterisation of CMOS high-speed camera systems

    NASA Astrophysics Data System (ADS)

    Weber, V.; Brübach, J.; Gordon, R. L.; Dreizler, A.

    2011-05-01

    Quantifying high-repetition rate laser diagnostic techniques for measuring scalars in turbulent combustion relies on a complete description of the relationship between detected photons and the signal produced by the detector. CMOS-chip based cameras are becoming an accepted tool for capturing high frame rate cinematographic sequences for laser-based techniques such as Particle Image Velocimetry (PIV) and Planar Laser Induced Fluorescence (PLIF) and can be used with thermographic phosphors to determine surface temperatures. At low repetition rates, imaging techniques have benefitted from significant developments in the quality of CCD-based camera systems, particularly with the uniformity of pixel response and minimal non-linearities in the photon-to-signal conversion. The state of the art in CMOS technology displays a significant number of technical aspects that must be accounted for before these detectors can be used for quantitative diagnostics. This paper addresses these issues.

  13. Table-top laser-based source of femtosecond, collimated, ultrarelativistic positron beams.

    PubMed

    Sarri, G; Schumaker, W; Di Piazza, A; Vargas, M; Dromey, B; Dieckmann, M E; Chvykov, V; Maksimchuk, A; Yanovsky, V; He, Z H; Hou, B X; Nees, J A; Thomas, A G R; Keitel, C H; Zepf, M; Krushelnick, K

    2013-06-21

    The generation of ultrarelativistic positron beams with short duration (τ(e+) ≃ 30 fs), small divergence (θ(e+) ≃ 3 mrad), and high density (n(e+) ≃ 10¹⁴-10¹⁵ cm⁻³) from a fully optical setup is reported. The detected positron beam propagates with a high-density electron beam and γ rays of similar spectral shape and peak energy, thus closely resembling the structure of an astrophysical leptonic jet. It is envisaged that this experimental evidence, besides the intrinsic relevance to laser-driven particle acceleration, may open the pathway for the small-scale study of astrophysical leptonic jets in the laboratory.

  14. Positron emission tomography (PET) imaging with 18F-based radiotracers

    PubMed Central

    Alauddin, Mian M

    2012-01-01

    Positron Emission Tomography (PET) is a nuclear medicine imaging technique that is widely used in the early detection and treatment follow-up of many diseases, including cancer. This modality requires biomolecules labeled with positron-emitting isotopes, which are synthesized prior to performing imaging studies. Fluorine-18 is one of several isotopes of fluorine that are routinely used in radiolabeling of biomolecules for PET because of its positron-emitting property and favorable half-life of 109.8 min. The biologically active molecule most commonly used for PET is 2-deoxy-2-18F-fluoro-β-D-glucose (18F-FDG), an analogue of glucose, for early detection of tumors. The concentrations of tracer accumulation (PET image) demonstrate the metabolic activity of tissues in terms of regional glucose metabolism and accumulation. Other tracers are also used in PET to image the tissue concentration. In this review, information on fluorination and radiofluorination reactions, radiofluorinating agents, and radiolabeling of various compounds and their application in PET imaging is presented. PMID:23133802

  15. Cloud Base Height Measurements at Manila Observatory: Initial Results from Constructed Paired Sky Imaging Cameras

    NASA Astrophysics Data System (ADS)

    Lagrosas, N.; Tan, F.; Antioquia, C. T.

    2014-12-01

    Fabricated all-sky imagers are efficient and cost-effective instruments for cloud detection and classification. Continuous operation of this instrument can result in the determination of cloud occurrence and cloud base heights for the paired system. In this study, a fabricated paired sky imaging system - consisting of two commercial digital cameras (Canon Powershot A2300) enclosed in weatherproof containers - is developed at Manila Observatory for the purpose of determining cloud base heights in the Manila Observatory area. One of the cameras is placed on the rooftop of Manila Observatory and the other is placed on the rooftop of the university dormitory, 489 m from the first camera. The cameras are programmed to simultaneously gather pictures every 5 min. Continuous operation of these cameras has been implemented since the end of May 2014, but data collection started at the end of October 2013. The data were processed following the algorithm proposed by Kassianov et al. (2005). The processing involves the calculation of the merit function that determines the area of overlap of the two pictures. When two pictures are overlapped, the minimum of the merit function corresponds to the pixel column positions where the pictures have the best overlap. In this study, pictures of overcast sky proved to be difficult to process for cloud base height and were excluded from processing. The figure below shows the initial results of the hourly average of cloud base heights from data collected from November 2013 to July 2014. Measured cloud base heights ranged from 250 m to 1.5 km. These are the heights of the cumulus and nimbus clouds that are dominant in this part of the world. Cloud base heights are low in the early hours of the day, indicating weak convection during these times. However, increased convection in the atmosphere can be deduced from the higher cloud base heights in the afternoon. The decrease of cloud base heights after 15:00 follows the trend of decreasing solar
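
    A minimal sketch of the merit-function matching and the subsequent height estimate: the merit is computed for a range of column shifts, its minimum gives the best overlap, and a small-angle triangulation converts the pixel shift into a cloud base height. The pure column-shift model and the zenith-geometry formula are simplifying assumptions, not the full Kassianov et al. (2005) procedure.

```python
import numpy as np


def merit(img_a, img_b, shift):
    """Sum-of-squared-differences merit for a horizontal pixel shift;
    the best overlap minimizes this value."""
    if shift > 0:
        a, b = img_a[:, shift:], img_b[:, :-shift]
    elif shift < 0:
        a, b = img_a[:, :shift], img_b[:, -shift:]
    else:
        a, b = img_a, img_b
    return np.mean((a.astype(float) - b.astype(float)) ** 2)


def best_shift(img_a, img_b, max_shift=200):
    """Column shift (pixels) giving the minimum merit, i.e. the best overlap."""
    shifts = range(-max_shift, max_shift + 1)
    return min(shifts, key=lambda s: merit(img_a, img_b, s))


def cloud_base_height(shift_px, rad_per_px, baseline_m):
    """Small-angle triangulation near the zenith (illustrative geometry):
    the cloud appears displaced by shift_px between the two cameras."""
    return baseline_m / np.tan(abs(shift_px) * rad_per_px)
```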

  16. Volcano geodesy at Santiaguito using ground-based cameras and particle image velocimetry

    NASA Astrophysics Data System (ADS)

    Johnson, J.; Andrews, B. J.; Anderson, J.; Lyons, J. J.; Lees, J. M.

    2012-12-01

    The active Santiaguito dome in Guatemala is an exceptional field site for ground-based optical observations owing to the bird's-eye viewing perspective from neighboring Santa Maria Volcano. From the summit of Santa Maria the frequent (1 per hour) explosions and continuous lava flow effusion may be observed from a vantage point, which is at a ~30 degree elevation angle, 1200 m above and 2700 m distant from the active vent. At these distances both video cameras and SLR cameras fitted with high-power lenses can effectively track blocky features translating and uplifting on the surface of Santiaguito's dome. We employ particle image velocimetry in the spatial frequency domain to map movements of ~10x10 m^2 surface patches with better than 10 cm displacement resolution. During three field campaigns to Santiaguito in 2007, 2009, and 2012 we have used cameras to measure dome surface movements for a range of time scales. In 2007 and 2009 we used video cameras recording at 30 fps to track repeated rapid dome uplift (more than 1 m within 2 s) of the 30,000 m^2 dome associated with the onset of eruptive activity. We inferred that these uplift events were responsible for both a seismic long period response and an infrasound bimodal pulse. In 2012 we returned to Santiaguito to quantify dome surface movements over hour-to-day-long time scales by recording time lapse imagery at one minute intervals. These longer time scales reveal dynamic structure to the uplift and subsidence trends, effusion rate, and surface flow patterns that are related to internal conduit pressurization. In 2012 we performed particle image velocimetry with multiple cameras spatially separated in order to reconstruct 3-dimensional surface movements.
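
    Particle image velocimetry in the spatial frequency domain amounts to locating the phase-correlation peak between corresponding image patches. A minimal sketch returning the integer-pixel displacement of the second exposure relative to the first (sub-pixel refinement and patch preprocessing omitted):

```python
import numpy as np


def phase_correlation_shift(patch_a, patch_b):
    """Patch displacement (dy, dx) in pixels via phase correlation."""
    A = np.fft.fft2(patch_a)
    B = np.fft.fft2(patch_b)
    R = A * np.conj(B)
    R /= np.maximum(np.abs(R), 1e-12)       # keep only the phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window into negative displacements
    if dy > patch_a.shape[0] // 2:
        dy -= patch_a.shape[0]
    if dx > patch_a.shape[1] // 2:
        dx -= patch_a.shape[1]
    return dy, dx
```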

  17. The Japanese Positron Factory

    NASA Astrophysics Data System (ADS)

    Okada, S.; Sunaga, H.; Kaneko, H.; Takizawa, H.; Kawasuso, A.; Yotsumoto, K.; Tanaka, R.

    1999-06-01

    The Positron Factory has been planned at the Japan Atomic Energy Research Institute (JAERI). The factory is expected to produce linac-based monoenergetic positron beams with the world's highest intensities of more than 10¹⁰ e+/sec, which will be applied to R&D in materials science, biotechnology and basic physics & chemistry. In this article, results of the design studies are demonstrated for the following essential components of the facilities: 1) Conceptual design of a high-power electron linac with a beam energy of 100 MeV and an average beam power of 100 kW, 2) Performance tests of the RF window in the high-power klystron and of the electron beam window, 3) Development of a self-driven rotating electron-to-positron converter and its performance tests, 4) Proposal of a multi-channel beam generation system for monoenergetic positrons, with a series of moderator assemblies, based on a newly developed Monte Carlo simulation and a demonstrative experiment, 5) Proposal of highly efficient moderator structures, 6) Conceptual design of a local shield to suppress the surrounding radiation and activation levels.

  18. Camera-Based Control for Industrial Robots Using OpenCV Libraries

    NASA Astrophysics Data System (ADS)

    Seidel, Patrick A.; Böhnke, Kay

    This paper describes a control system for industrial robots whose reactions are based on the analysis of images provided by a camera mounted on top of the robot. We show that such a control system can be designed and implemented with an open-source image processing library and cheap hardware. Using one specific robot as an example, we demonstrate the structure of a possible control algorithm running on a PC and its interaction with the robot.

  19. Using ground-based stereo cameras to derive cloud-level wind fields.

    PubMed

    Porter, John N; Cao, Guang Xia

    2009-08-15

    Upper-level wind fields are obtained by tracking the motion of cloud features as seen in calibrated ground-based stereo cameras. By tracking many cloud features, it is possible to obtain horizontal wind speed and direction over a cone area throughout the troposphere. Preliminary measurements were made at the Mauna Loa Observatory, and resulting wind measurements are compared with winds from the Hilo, Hawaii radiosondes.

  20. The computation of cloud base height from paired whole-sky imaging cameras

    SciTech Connect

    Allmen, M.C.; Kegelmeyer, W.P. Jr.

    1994-03-01

    A major goal for global change studies is to improve the accuracy of general circulation models (GCMs) capable of predicting the timing and magnitude of greenhouse gas-induced global warming. Research has shown that cloud radiative feedback is the single most important effect determining the magnitude of possible climate responses to human activity. Of particular value to reducing the uncertainties associated with cloud-radiation interactions is the measurement of cloud base height (CBH), both because it is a dominant factor in determining the infrared radiative properties of clouds with respect to the earth's surface and lower atmosphere and because CBHs are essential to measuring cloud cover fraction. We have developed a novel approach to the extraction of cloud base height from pairs of whole sky imaging (WSI) cameras. The core problem is to spatially register cloud fields from widely separated WSI cameras; once this is complete, triangulation provides the CBH measurements. The wide camera separation (necessary to cover the desired observation area) and the self-similarity of clouds defeat all standard matching algorithms when applied to static views of the sky. To address this, our approach is based on optical flow methods that exploit the fact that modern WSIs provide sequences of images. We will describe the algorithm and present its performance as evaluated both on real data validated by ceilometer measurements and on a variety of simulated cases.

  1. Obstacle classification and 3D measurement in unstructured environments based on ToF cameras.

    PubMed

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-06-18

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient.

  2. PETIROC2 based readout electronics optimization for Gamma Cameras and PET detectors

    NASA Astrophysics Data System (ADS)

    Monzo, J. M.; Aguilar, A.; González-Montoro, A.; Lamprou, E.; González, A. J.; Hernández, L.; Mazur, D.; Colom, R. J.; Benlloch, J. M.

    2017-02-01

    Developing front-end electronics to improve charge detection and time resolution in gamma-ray detectors is one of the main tasks in improving the performance of new multimodal imaging systems that merge information from Magnetic Resonance Imaging and Gamma Camera or PET tomographs. The aim of this work is to study the behaviour and to optimize the performance of an ASIC for PET and Gamma Camera applications based on SiPM detectors. PETIROC2 is a commercial ASIC developed by Weeroc to provide accurate charge and time coincidence resolutions. It has 32 analog input channels that are independently managed. Each channel is divided into two signals, one for time stamping using a TDC and another for charge measurement. In this work, PETIROC2 is evaluated in an experimental setup composed of two detectors based on pixelated LYSO crystals, each coupled to a Hamamatsu 4×4 SiPM array. Both detectors work in coincidence, with a separation distance between them that can be modified. In the present work, an energy resolution of 13.6% FWHM and a time coincidence resolution of 815 ps FWHM have been obtained. These results will be useful to optimize and improve PETIROC2-based PET and Gamma Camera systems.

  3. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors.

    PubMed

    Calderón, Y; Chmeissani, M; Kolstein, M; De Lorenzo, G

    2014-06-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm² area and 2 mm thickness. The detectors are pixelated and stacked forming a 3D detector with voxel sizes of 2 × 1 × 2 mm³. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS) and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10⁴ Bq source activities with equal efficiency and is completely saturated at 10⁹ Bq. The efficiency of the system is evaluated using a simulated 18F point source phantom in the center of the Field-of-View (FOV) achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8.

  4. Shadow detection in camera-based vehicle detection: survey and analysis

    NASA Astrophysics Data System (ADS)

    Barcellos, Pablo; Gomes, Vitor; Scharcanski, Jacob

    2016-09-01

    The number of vehicles in circulation in modern urban centers has greatly increased, which motivates the development of automatic traffic monitoring systems. Consequently, camera-based traffic monitoring systems are becoming more widely used, since they offer important technological advantages in comparison with traditional traffic monitoring systems (e.g., simpler maintenance and more flexibility for the design of practical configurations). The segmentation of the foreground (i.e., vehicles) is a fundamental step in the workflow of a camera-based traffic monitoring system. However, foreground segmentation can be negatively affected by vehicle shadows. This paper discusses the types of shadow detection methods available in the literature, their advantages, disadvantages, and in which situations these methods can improve camera-based vehicle detection for traffic monitoring. In order to compare the performance of these different types of shadow detection methods, experiments are conducted with typical methods of each category using publicly available datasets. This work shows that shadow detection definitely can improve the reliability of traffic monitoring systems, but the choice of the type of shadow method depends on the system specifications (e.g., tolerated error), the availability of computational resources, and prior information about the scene and its illumination in regular operation conditions.

  5. Obstacle Classification and 3D Measurement in Unstructured Environments Based on ToF Cameras

    PubMed Central

    Yu, Hongshan; Zhu, Jiang; Wang, Yaonan; Jia, Wenyan; Sun, Mingui; Tang, Yandong

    2014-01-01

    Inspired by the human 3D visual perception system, we present an obstacle detection and classification method based on the use of Time-of-Flight (ToF) cameras for robotic navigation in unstructured environments. The ToF camera provides 3D sensing by capturing an image along with per-pixel 3D space information. Based on this valuable feature and human knowledge of navigation, the proposed method first removes irrelevant regions which do not affect robot's movement from the scene. In the second step, regions of interest are detected and clustered as possible obstacles using both 3D information and intensity image obtained by the ToF camera. Consequently, a multiple relevance vector machine (RVM) classifier is designed to classify obstacles into four possible classes based on the terrain traversability and geometrical features of the obstacles. Finally, experimental results in various unstructured environments are presented to verify the robustness and performance of the proposed approach. We have found that, compared with the existing obstacle recognition methods, the new approach is more accurate and efficient. PMID:24945679

  6. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality for which one of the main key advantages is the target conformality allowed by the physical properties of ion species. However, in order to maximise the exploitation of its potentialities, online monitoring is required in order to assert the treatment quality, namely monitoring devices relying on the detection of secondary radiations. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  7. Evaluation of Compton gamma camera prototype based on pixelated CdTe detectors

    PubMed Central

    Calderón, Y.; Chmeissani, M.; Kolstein, M.; De Lorenzo, G.

    2014-01-01

    A proposed Compton camera prototype based on pixelated CdTe is simulated and evaluated in order to establish its feasibility and expected performance in real laboratory tests. The system is based on module units containing a 2×4 array of square CdTe detectors of 10×10 mm² area and 2 mm thickness. The detectors are pixelated and stacked forming a 3D detector with voxel sizes of 2 × 1 × 2 mm³. The camera performance is simulated with the Geant4-based Architecture for Medicine-Oriented Simulations (GAMOS) and the Origin Ensemble (OE) algorithm is used for the image reconstruction. The simulation shows that the camera can operate with up to 10⁴ Bq source activities with equal efficiency and is completely saturated at 10⁹ Bq. The efficiency of the system is evaluated using a simulated 18F point source phantom in the center of the Field-of-View (FOV) achieving an intrinsic efficiency of 0.4 counts per second per kilobecquerel. The spatial resolution measured from the point spread function (PSF) shows a FWHM of 1.5 mm along the direction perpendicular to the scatterer, making it possible to distinguish two points at 3 mm separation with a peak-to-valley ratio of 8. PMID:24932209

  8. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring.

    PubMed

    Pinto, M; Dauvergne, D; Freud, N; Krimmer, J; Letang, J M; Ray, C; Roellinghoff, F; Testa, E

    2014-12-21

    Hadrontherapy is an innovative radiation therapy modality for which one of the main key advantages is the target conformality allowed by the physical properties of ion species. However, in order to maximise the exploitation of its potentialities, online monitoring is required in order to assert the treatment quality, namely monitoring devices relying on the detection of secondary radiations. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays to be used in a clinical scenario. In addition, an analytical tool is developed based on the Monte Carlo data to predict the expected precision for a given geometrical configuration. Such a method follows the clinical workflow requirements to simultaneously have a solution that is relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution to be used in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  9. An enhanced high-resolution EMCCD-based gamma camera using SiPM side detection.

    PubMed

    Heemskerk, J W T; Korevaar, M A N; Huizenga, J; Kreuger, R; Schaart, D R; Goorden, M C; Beekman, F J

    2010-11-21

    Electron-multiplying charge-coupled devices (EMCCDs) coupled to scintillation crystals can be used for high-resolution imaging of gamma rays in scintillation counting mode. However, the detection of false events as a result of EMCCD noise deteriorates the spatial and energy resolution of these gamma cameras and creates a detrimental background in the reconstructed image. In order to improve the performance of an EMCCD-based gamma camera with a monolithic scintillation crystal, arrays of silicon photomultipliers (SiPMs) can be mounted on the sides of the crystal to detect escaping scintillation photons, which are otherwise neglected. This will provide a priori knowledge about the correct number and energies of gamma interactions that are to be detected in each CCD frame. This information can be used as an additional detection criterion, e.g. for the rejection of otherwise falsely detected events. The method was tested using a gamma camera based on a back-illuminated EMCCD, coupled to a 3 mm thick continuous CsI:Tl crystal. Twelve SiPMs have been mounted on the sides of the CsI:Tl crystal. When the information from the SiPMs is used to select scintillation events in the EMCCD image, the background level for 99mTc is reduced by a factor of 2. Furthermore, the SiPMs enable detection of 125I scintillations. A hybrid SiPM-/EMCCD-based gamma camera thus offers great potential for applications such as in vivo imaging of gamma emitters.

  10. Monte Carlo simulations of compact gamma cameras based on avalanche photodiodes.

    PubMed

    Després, Philippe; Funk, Tobias; Shah, Kanai S; Hasegawa, Bruce H

    2007-06-07

    Avalanche photodiodes (APDs), and in particular position-sensitive avalanche photodiodes (PSAPDs), are an attractive alternative to photomultiplier tubes (PMTs) for reading out scintillators for PET and SPECT. These solid-state devices offer high gain and quantum efficiency, and can potentially lead to more compact and robust imaging systems with improved spatial and energy resolution. In order to evaluate this performance improvement, we have conducted Monte Carlo simulations of gamma cameras based on avalanche photodiodes. Specifically, we investigated the relative merit of discrete APDs and PSAPDs in a simple continuous-crystal gamma camera. The simulated camera was composed of either a 4 × 4 array of four-channel 8 × 8 mm² PSAPDs or an 8 × 8 array of 4 × 4 mm² discrete APDs. These configurations, each requiring 64 readout channels, were used to read the scintillation light from a 6 mm thick continuous CsI:Tl crystal covering the entire 3.6 × 3.6 cm² photodiode array. The simulations, conducted with GEANT4, accounted for the optical properties of the materials, the noise characteristics of the photodiodes and the nonlinear charge division in PSAPDs. The performance of the simulated camera was evaluated in terms of spatial resolution, energy resolution and spatial uniformity at 99mTc (140 keV) and 125I (approximately 30 keV) energies. Intrinsic spatial resolutions of 1.0 and 0.9 mm were obtained for the APD- and PSAPD-based cameras respectively for 99mTc, and corresponding values of 1.2 and 1.3 mm FWHM for 125I. The simulations yielded maximal energy resolutions of 7% and 23% for 99mTc and 125I, respectively. PSAPDs also provided better spatial uniformity than APDs in the simple system studied. These results suggest that APDs constitute an attractive technology especially suitable for building compact, small-field-of-view gamma cameras dedicated, for example, to small-animal or organ imaging.
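
    The position estimate in such continuous-crystal cameras is, at its simplest, an Anger-logic centroid of the per-photodiode signals. A generic sketch (pad coordinates and noise handling are assumptions; the paper's treatment of PSAPD charge division is not reproduced):

```python
import numpy as np


def anger_centroid(signals, pad_x, pad_y):
    """Light-sharing (Anger-logic) position estimate: the centroid of the
    per-photodiode signals weighted by the known pad centre coordinates."""
    s = np.asarray(signals, dtype=float)
    s = np.clip(s, 0.0, None)          # suppress negative noise excursions
    total = s.sum()
    return float(s @ pad_x) / total, float(s @ pad_y) / total


# e.g. an 8 x 8 array of 4 mm pads centred on the origin (illustrative):
# xs, ys = np.meshgrid(np.arange(8) * 4.0 - 14.0, np.arange(8) * 4.0 - 14.0)
# x_hat, y_hat = anger_centroid(signal_grid.ravel(), xs.ravel(), ys.ravel())
```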

  11. Materials analysis using positron beam lifetime spectroscopy

    SciTech Connect

    Hartley, J.; Howell, R. H., Asoka-Kumar, P.; Sterne, P.; Stoeffl, W.

    1998-11-12

    We are using defect analysis capabilities based on two positron beam lifetime spectrometers: the first is based on a 3 MeV electrostatic accelerator and the second on our high-current linac beam. The high-energy beam lifetime spectrometer is routinely used to perform positron lifetime analysis with a 3 MeV positron beam on thick sample specimens. It is being used for bulk sample analysis and analysis of samples encapsulated in controlled environments for in situ measurements. A second, low-energy, microscopically focused, pulsed positron beam for defect analysis by positron lifetime spectroscopy is under development at the LLNL high-current positron source. This beam will enable defect-specific, 3-dimensional maps of defect concentration with sub-micron location resolution. When coupled with first-principles calculations of defect-specific positron lifetimes, it will enable new levels of defect concentration mapping and defect identification.

  12. Positron emission mammography imaging

    SciTech Connect

    Moses, William W.

    2003-10-02

    This paper examines current trends in Positron Emission Mammography (PEM) instrumentation and the performance tradeoffs inherent in them. The most common geometry is a pair of parallel planes of detector modules. They subtend a larger solid angle around the breast than conventional PET cameras, and so have both higher efficiency and lower cost. Extensions to this geometry include encircling the breast, measuring the depth of interaction (DOI), and dual-modality imaging (PEM and x-ray mammography, as well as PEM and x-ray guided biopsy). The ultimate utility of PEM may not be decided by instrument performance, but by biological and medical factors, such as the patient to patient variation in radiotracer uptake or the as yet undetermined role of PEM in breast cancer diagnosis and treatment.

  13. 3D point cloud registration based on the assistant camera and Harris-SIFT

    NASA Astrophysics Data System (ADS)

    Zhang, Yue; Yu, HongYang

    2016-07-01

    3D (three-dimensional) point cloud registration is a hot topic in the field of 3D reconstruction, but most registration methods are neither real-time nor effective. This paper proposes a point cloud registration method for 3D reconstruction based on Harris-SIFT and an assistant camera. The assistant camera is used to pinpoint the mobile 3D reconstruction device. The feature points of the images are detected using the Harris operator, the main orientation of each feature point is calculated, and finally the feature point descriptors are generated after rotating the coordinates of the descriptors relative to the feature points' main orientations. Experimental results demonstrate the effectiveness of the proposed method.
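
    A minimal OpenCV sketch of the Harris-SIFT idea: detect Harris corners, compute SIFT descriptors at those corners, and match them with a ratio test. Parameter values are assumptions, and SIFT availability depends on the OpenCV build.

```python
import cv2


def harris_sift_features(gray, max_corners=500):
    """Harris corners with SIFT descriptors computed at each corner
    (a minimal stand-in for a Harris-SIFT pipeline)."""
    corners = cv2.goodFeaturesToTrack(gray, max_corners, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True,
                                      k=0.04)
    kps = [cv2.KeyPoint(float(x), float(y), 7)
           for x, y in corners.reshape(-1, 2)]
    sift = cv2.SIFT_create()               # requires an OpenCV build with SIFT
    return sift.compute(gray, kps)         # (keypoints, descriptors)


def match_ratio_test(desc_a, desc_b, ratio=0.75):
    """Lowe ratio-test matching between two descriptor sets."""
    bf = cv2.BFMatcher(cv2.NORM_L2)
    matches = []
    for pair in bf.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            matches.append(pair[0])
    return matches


# usage sketch:
# gray1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
# gray2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)
# kps1, d1 = harris_sift_features(gray1)
# kps2, d2 = harris_sift_features(gray2)
# good = match_ratio_test(d1, d2)
```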

  14. Optical character recognition of camera-captured images based on phase features

    NASA Astrophysics Data System (ADS)

    Diaz-Escobar, Julia; Kober, Vitaly

    2015-09-01

    Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have been developed recently, such as recognition of license plates, business cards, receipts and street signs, as well as document classification, augmented reality, language translation and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadow and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains a lot of important information regardless of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.

  15. Improved camera calibration method based on perpendicularity compensation for binocular stereo vision measurement system.

    PubMed

    Jia, Zhenyuan; Yang, Jinghao; Liu, Wei; Wang, Fuji; Liu, Yang; Wang, Lingli; Fan, Chaonan; Zhao, Kai

    2015-06-15

    High-precision calibration of binocular vision systems plays an important role in accurate dimensional measurements. In this paper, an improved camera calibration method is proposed. First, an accurate intrinsic parameters calibration method based on active vision with perpendicularity compensation is developed. Compared to the previous work, this method eliminates the effect of non-perpendicularity of the camera motion on calibration accuracy. The principal point, scale factors, and distortion factors are calculated independently in this method, thereby allowing the strong coupling of these parameters to be eliminated. Second, an accurate global optimization method with only 5 images is presented. The results of calibration experiments show that the accuracy of the calibration method can reach 99.91%.

  16. Characterization of a CCD-camera-based system for measurement of the solar radial energy distribution

    NASA Astrophysics Data System (ADS)

    Gambardella, A.; Galleano, R.

    2011-10-01

    Charge-coupled device (CCD)-camera-based measurement systems offer the possibility to gather information on the solar radial energy distribution (sunshape). Sunshape measurements are very useful in designing high concentration photovoltaic systems and heliostats as they collect light only within a narrow field of view, the dimension of which has to be defined in the context of several different system design parameters. However, in this regard the CCD camera response needs to be adequately characterized. In this paper, uncertainty components for optical and other CCD-specific sources have been evaluated using indoor test procedures. We have considered CCD linearity and background noise, blooming, lens aberration, exposure time linearity and quantization error. Uncertainty calculation showed that a 0.94% (k = 2) combined expanded uncertainty on the solar radial energy distribution can be assumed.

  17. Secure Chaotic Map Based Block Cryptosystem with Application to Camera Sensor Networks

    PubMed Central

    Guo, Xianfeng; Zhang, Jiashu; Khan, Muhammad Khurram; Alghathbar, Khaled

    2011-01-01

    Recently, Wang et al. presented an efficient logistic-map-based block encryption system. The encryption system employs feedback ciphertext to achieve plaintext dependence of the sub-keys. Unfortunately, we discovered that their scheme is unable to withstand a keystream attack. To improve its security, this paper proposes a novel chaotic-map-based block cryptosystem. At the same time, a secure architecture for camera sensor networks is constructed. The network comprises a set of inexpensive camera sensors to capture images, a sink node equipped with sufficient computation and storage capabilities, and a data processing server. Transmission security between the sink node and the server is achieved by using the improved cipher. Both theoretical analysis and simulation results indicate that the improved algorithm overcomes the flaws while retaining all the merits of the original cryptosystem. In addition, the computational cost and efficiency of the proposed scheme are encouraging for practical implementation in real environments as well as in camera sensor networks. PMID:22319371
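    For background, the general family of logistic-map stream ciphers discussed above can be sketched as follows. This is a toy illustration for intuition only, not the improved cryptosystem proposed in the paper (nor Wang et al.'s scheme), and it is not suitable for real security use.

```python
# Toy logistic-map stream cipher, for intuition only -- NOT the scheme proposed
# in the paper and NOT suitable for real cryptographic use.
def logistic_keystream(x0, r, n, skip=100):
    """Generate n key bytes from the logistic map x <- r*x*(1-x)."""
    x = x0
    for _ in range(skip):          # discard transient iterations
        x = r * x * (1.0 - x)
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_crypt(data, key_x0=0.3141592, r=3.99):
    ks = logistic_keystream(key_x0, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

if __name__ == "__main__":
    msg = b"camera sensor network payload"
    enc = xor_crypt(msg)
    assert xor_crypt(enc) == msg   # an XOR stream cipher is its own inverse
    print(enc.hex())
```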

  18. Treatment modification of yttrium-90 radioembolization based on quantitative positron emission tomography/CT imaging.

    PubMed

    Chang, Ted T; Bourgeois, Austin C; Balius, Anastasia M; Pasciak, Alexander S

    2013-03-01

    Treatment activity for yttrium-90 ((90)Y) radioembolization when calculated by using the manufacturer-recommended technique is only partially patient-specific and may result in a subtumoricidal dose in some patients. The authors describe the use of quantitative (90)Y positron emission tomography/computed tomography as a tool to provide patient-specific optimization of treatment activity and evaluate this new method in a patient who previously received traditional (90)Y radioembolization. The modified treatment resulted in a 40-Gy increase in absorbed dose to tumor and complete resolution of disease in the treated area within 3 months.

  19. Infrared line cameras based on linear arrays for industrial temperature measurement

    NASA Astrophysics Data System (ADS)

    Drogmoeller, Peter; Hofmann, Guenter; Budzier, Helmut; Reichardt, Thomas; Zimmerhackl, Manfred

    2002-03-01

    The PYROLINE/MikroLine cameras provide continuous, non-contact measurement of linear temperature distributions. Operation in conjunction with the IR_LINE software provides data recording, real-time graphical analysis, process integration and camera-control capabilities. One system is based on pyroelectric line sensors with either 128 or 256 elements, operating at frame rates of 128 and 544 Hz, respectively. Temperatures between 0 and 1300 °C are measurable in four distinct spectral ranges: 8-14 µm for low temperatures, 3-5 µm for medium temperatures, 4.8-5.2 µm for glass-temperature applications and 1.4-1.8 µm for high temperatures. A newly developed IR line camera (HRP 250), based upon a thermoelectrically cooled, 160-element PbSe detector array operating in the 3-5 µm spectral range, permits the thermal gradients of fast-moving targets to be measured in the range 50-180 °C at a maximum frequency of 18 kHz. This special system was used to measure temperature distributions on rotating tires at velocities of more than 300 km/h (190 mph). A modified version of this device was used for real-time measurement of disk-brake rotors under load. Another line camera, consisting of a 256-element InGaAs array, was developed for the spectral range of 1.4-1.8 µm to detect impurities of polypropylene and polyethylene in raw cotton at frequencies of 2.5-5 kHz.

  20. AOTF-based NO2 camera, results from the AROMAT-2 campaign

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Fussen, Didier; Vanhamel, Jurgen; Van Opstal, Bert; Maes, Jeroen; Merlaud, Alexis; Stebel, Kerstin; Schuettemeyer, Dirk

    2016-04-01

    A hyperspectral imager based on an acousto-optical tunable filter (AOTF) has been developed in the frame of the ALTIUS mission (atmospheric limb tracker for the investigation of the upcoming stratosphere). ALTIUS is a three-channel (UV, VIS, NIR) space-borne limb sounder aiming at the retrieval of concentration profiles of important trace species (O3, NO2, aerosols and more) with good vertical resolution. An optical breadboard was built from the VIS channel concept and is now serving as a ground-based remote sensing instrument. Its good spectral resolution (0.6 nm), coupled with its natural imaging capabilities (6° square field of view sampled by a 512×512 pixel sensor), makes it suitable for the measurement of 2D fields of NO2, similarly to what is nowadays achieved with SO2 cameras. Our NO2 camera was one of the instruments that took part in the second Airborne ROmanian Measurements of Aerosols and Trace gases (AROMAT-2) campaign in August 2015. It was pointed at the smokestacks of the coal- and oil-burning power plant of Turceni (Romania) in order to image the emitted NO2 field and derive slant columns and instantaneous emission fluxes. The ultimate goal of the AROMAT campaigns is to prepare the validation of TROPOMI onboard Sentinel-5P. We will briefly describe the instrumental concept of the NO2 camera, its heritage from the ALTIUS mission, and its advantages compared to previous attempts to reach the same goal. Key results obtained with the camera during the AROMAT-2 campaign will be presented and further improvements will be discussed.

  1. Intense source of slow positrons

    NASA Astrophysics Data System (ADS)

    Perez, P.; Rosowsky, A.

    2004-10-01

    We describe a novel design for an intense source of slow positrons based on pair production with a beam of electrons from a 10 MeV accelerator hitting a thin target at a low incidence angle. The positrons are collected with a set of coils adapted to the large production angle. The collection system is designed to inject the positrons into a Greaves-Surko trap (Phys. Rev. A 46 (1992) 5696). Such a source could be the basis for a series of experiments in fundamental and applied research and would also be a prototype source for industrial applications, which concern the field of defect characterization in the nanometer scale.

  2. Automatic control of a robot camera for broadcasting based on cameramen's techniques and subjective evaluation and analysis of reproduced images.

    PubMed

    Kato, D; Katsuura, T; Koyama, H

    2000-03-01

    With the goal of achieving an intelligent robot camera system that can take dynamic images automatically through humanlike, natural camera work, we analyzed how images were shot, subjectively evaluated reproduced images, and examined effects of camerawork, using camera control technique as a parameter. It was found that (1) A high evaluation is obtained when human-based data are used for the position adjusting velocity curve of the target; (2) Evaluation scores are relatively high for images taken with feedback-feedforward camera control method for target movement in one direction; (3) Keeping the target within the image area using the control method that imitates human camera handling becomes increasingly difficult when the target changes both direction and velocity and becomes bigger and faster, and (4) The mechanical feedback method can cope with rapid changes in the target's direction and velocity, constantly keeping the target within the image area, though the viewer finds the image rather mechanical as opposed to humanlike.

  3. Spin polarized low-energy positron source

    NASA Astrophysics Data System (ADS)

    Petrov, V. N.; Samarin, S. N.; Sudarshan, K.; Pravica, L.; Guagliardo, P.; Williams, J. F.

    2015-06-01

    This paper presents an investigation of spin polarization of positrons from a source based on the decay of 22Na isotopes. Positrons are moderated by transmission through a tungsten film and electrostatically focussed and transported through a 90 deg deflector to produce a slow positron beam with polarization vector normal to the linear momentum. The polarization of the beam was determined to be about 10% by comparison with polarized electron scattering asymmetries from a thin Fe film on W(110) at 10-10 Torr. Low energy electron emission from Fe layer on W(100) surfaces under positron impact is explored. It is shown that the intensity asymmetry of the electron emission as a function of the incident positron energy can be used to estimate the polarization of the positron beam. Also several materials with long mean free paths for spin relaxation are considered as possible moderators with increased polarization of the emergent positrons.

  4. Development of plenoptic infrared camera using low dimensional material based photodetectors

    NASA Astrophysics Data System (ADS)

    Chen, Liangliang

    Infrared (IR) sensors have extended imaging from the submicron visible spectrum to wavelengths of tens of microns and are widely used in military and civilian applications. Conventional IR cameras based on bulk semiconductor materials suffer from low frame rate, low resolution, temperature dependence and high cost, while nanotechnology based on low-dimensional materials such as the carbon nanotube (CNT) has made considerable progress in research and industry. The unique properties of CNTs motivate the investigation of CNT-based IR photodetectors and imaging systems to resolve the sensitivity, speed and cooling difficulties of state-of-the-art IR imaging. Reliability and stability are critical to the transition from nanoscience to nanoengineering, especially for infrared sensing; this matters both for the fundamental understanding of CNT photoresponse processes and for the development of a novel infrared-sensitive material with unique optical and electrical features. In the proposed research, a sandwich-structured sensor was fabricated between two polymer layers: the polyimide substrate isolated the sensor from background noise, and the top parylene packaging blocked humid environmental factors. At the same time, the fabrication process was optimized by real-time electrically monitored dielectrophoresis and multiple annealing steps to improve fabrication yield and sensor performance. The nanoscale infrared photodetector was characterized using digital microscopy and a precise linear stage in order to understand it fully. In addition, a low-noise, high-gain readout system was designed together with the CNT photodetector to realize the nano-sensor IR camera. To explore more of the infrared light field, we apply compressive sensing algorithms to light field sampling, 3-D imaging and compressive video sensing. The redundancy of the whole light field, including angular images for the light field, binocular images for the 3-D camera and temporal information of video streams, are extracted and

  5. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-01-01

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body. PMID:28300783

  6. Measurement of food volume based on single 2-D image without conventional camera calibration.

    PubMed

    Yue, Yaofeng; Jia, Wenyan; Sun, Mingui

    2012-01-01

    Food portion size measurement combined with a database of calories and nutrients is important in the study of metabolic disorders such as obesity and diabetes. In this work, we present a convenient and accurate approach to the calculation of food volume by measuring several dimensions using a single 2-D image as the input. This approach does not require the conventional checkerboard based camera calibration since it is burdensome in practice. The only prior requirements of our approach are: 1) a circular container with a known size, such as a plate, a bowl or a cup, is present in the image, and 2) the picture is taken under a reasonable assumption that the camera is always held level with respect to its left and right sides and its lens is tilted down towards foods on the dining table. We show that, under these conditions, our approach provides a closed form solution to camera calibration, allowing convenient measurement of food portion size using digital pictures.
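    As a much simplified illustration of how a circular container of known size can fix the scale of a single 2-D image (a reduced stand-in for the closed-form calibration described above), one can fit an ellipse to the plate rim and convert pixels to millimetres from the rim's longer axis; the plate diameter, file name and edge thresholds below are assumptions.

```python
# Simplified scale-from-plate sketch: fit an ellipse to a circular plate of known
# diameter and derive a mm-per-pixel factor along its longer axis. This is only a
# reduced illustration, not the closed-form calibration derived in the paper.
import cv2
import numpy as np

PLATE_DIAMETER_MM = 260.0   # assumed known plate size

def mm_per_pixel_from_plate(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    # Assume the plate rim is the largest contour in the image.
    rim = max(contours, key=cv2.contourArea)
    (cx, cy), (axis1, axis2), angle = cv2.fitEllipse(rim)
    # A tilted circle foreshortens only along the minor axis, so the longer axis of
    # the projected ellipse corresponds to the true plate diameter.
    return PLATE_DIAMETER_MM / max(axis1, axis2)

if __name__ == "__main__":
    img = cv2.imread("meal.jpg")            # hypothetical meal photograph
    if img is not None:
        print("scale:", mm_per_pixel_from_plate(img), "mm/pixel")
```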

  7. Validity and repeatability of a depth camera-based surface imaging system for thigh volume measurement.

    PubMed

    Bullas, Alice M; Choppin, Simon; Heller, Ben; Wheat, Jon

    2016-10-01

    Complex anthropometrics, such as area and volume, can identify changes in body size and shape that are not detectable with traditional anthropometrics of lengths, breadths, skinfolds and girths. However, taking these complex measurements with manual techniques (tape measurement and water displacement) is often unsuitable. Three-dimensional (3D) surface imaging systems are quick and accurate alternatives to manual techniques, but their use is restricted by cost, complexity and limited access. We have developed a novel low-cost, accessible and portable 3D surface imaging system based on consumer depth cameras. The aim of this study was to determine the validity and repeatability of the system in the measurement of thigh volume. The thigh volumes of 36 participants were measured with the depth camera system and with a high-precision, commercially available 3D surface imaging system (3dMD). The depth camera system used within this study is highly repeatable (technical error of measurement (TEM) of <1.0% intra-calibration and ~2.0% inter-calibration) but systematically overestimates (~6%) thigh volume when compared to the 3dMD system. This suggests poor agreement yet a close relationship, which once corrected can yield a usable thigh volume measurement.
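    For reference, the relative technical error of measurement (TEM) quoted above is conventionally computed from repeated measurements as in the short sketch below; the volume values are invented for illustration and are not study data.

```python
# Relative technical error of measurement (TEM) for repeated volume measurements.
# Illustration only; the measured values below are invented, not study data.
import numpy as np

def relative_tem(trial1, trial2):
    """%TEM from two repeated measurements per participant."""
    d = np.asarray(trial1, float) - np.asarray(trial2, float)
    tem = np.sqrt(np.sum(d ** 2) / (2 * len(d)))
    mean = np.mean(np.concatenate([trial1, trial2]))
    return 100.0 * tem / mean

print(relative_tem([5.11, 6.02, 4.87], [5.15, 5.96, 4.91]))  # thigh volumes in litres
```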

  8. Person Recognition System Based on a Combination of Body Images from Visible Light and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Hong, Hyung Gil; Kim, Ki Wan; Park, Kang Ryoung

    2017-03-16

    The human body contains identity information that can be used for the person recognition (verification/recognition) problem. In this paper, we propose a person recognition method using the information extracted from body images. Our research is novel in the following three ways compared to previous studies. First, we use images of the human body for recognizing individuals. To overcome the limitations of previous studies on body-based person recognition that use only visible light images for recognition, we use human body images captured by two different kinds of camera, including a visible light camera and a thermal camera. The use of two different kinds of body image helps us to reduce the effects of noise, background, and variation in the appearance of a human body. Second, we apply a state-of-the-art method, the convolutional neural network (CNN), among various available methods, for image feature extraction in order to overcome the limitations of traditional hand-designed image feature extraction methods. Finally, with the extracted image features from body images, the recognition task is performed by measuring the distance between the input and enrolled samples. The experimental results show that the proposed method is efficient for enhancing recognition accuracy compared to systems that use only visible light or thermal images of the human body.

  9. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    PubMed Central

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the shortcomings of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the system's ability to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The flight results show that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between a traditional MDCS (MADC II) and the proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products. PMID:25835187

  10. Random versus Game Trail-Based Camera Trap Placement Strategy for Monitoring Terrestrial Mammal Communities

    PubMed Central

    Cusack, Jeremy J.; Dickman, Amy J.; Rowcliffe, J. Marcus; Carbone, Chris; Macdonald, David W.; Coulson, Tim

    2015-01-01

    Camera trap surveys exclusively targeting features of the landscape that increase the probability of photographing one or several focal species are commonly used to draw inferences on the richness, composition and structure of entire mammal communities. However, these studies ignore expected biases in species detection arising from sampling only a limited set of potential habitat features. In this study, we test the influence of camera trap placement strategy on community-level inferences by carrying out two spatially and temporally concurrent surveys of medium to large terrestrial mammal species within Tanzania’s Ruaha National Park, employing either strictly game trail-based or strictly random camera placements. We compared the richness, composition and structure of the two observed communities, and evaluated what makes a species significantly more likely to be caught at trail placements. Observed communities differed marginally in their richness and composition, although differences were more noticeable during the wet season and for low levels of sampling effort. Lognormal models provided the best fit to rank abundance distributions describing the structure of all observed communities, regardless of survey type or season. Despite this, carnivore species were more likely to be detected at trail placements relative to random ones during the dry season, as were larger bodied species during the wet season. Our findings suggest that, given adequate sampling effort (> 1400 camera trap nights), placement strategy is unlikely to affect inferences made at the community level. However, surveys should consider more carefully their choice of placement strategy when targeting specific taxonomic or trophic groups. PMID:25950183

  11. Camera characterization using back-propagation artificial neutral network based on Munsell system

    NASA Astrophysics Data System (ADS)

    Liu, Ye; Yu, Hongfei; Shi, Junsheng

    2008-02-01

    The RGB signals output by a camera do not directly correspond to the tristimulus values of the CIE standard colorimetric observer; that is, camera RGB is a device-dependent color space. To achieve accurate color information, color characterization is required, which derives a transformation between camera RGB values and CIE XYZ values. In this paper we set up a back-propagation (BP) artificial neural network to realize the mapping from camera RGB to CIE XYZ. We used the Munsell Book of Color, with a total of 1267 patches, as color samples. Each patch of the Munsell Book of Color was recorded by the camera to obtain its RGB values. The patches were imaged in a light booth with a dark surround, using 0/45 viewing/illuminating geometry and a D65 illuminant; the lighting illuminating the reference target needs to be as uniform as possible. The BP network had 5 layers (3-10-10-10-3), a structure selected through our experiments. 1000 training samples were selected randomly from the 1267 samples, and the remaining 267 samples were used as testing samples. Experimental results show that the mean color difference between the reproduced colors and the target colors is 0.5 CIELAB color-difference units, which is smaller than the largest acceptable color difference of 2 CIELAB color-difference units. The results are adequate for applications requiring more accurate color measurement, such as medical diagnostics, cosmetics production, and color reproduction across different media.
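    A minimal sketch of this kind of camera characterization, mapping camera RGB to CIE XYZ with a (3-10-10-10-3) multilayer perceptron as described above, can be written with scikit-learn; the training data below are random placeholders standing in for the Munsell patch measurements.

```python
# Sketch of BP-network camera characterization: camera RGB -> CIE XYZ.
# The (10, 10, 10) hidden layers mirror the 3-10-10-10-3 topology in the abstract;
# the random data below are placeholders for the Munsell patch measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
camera_rgb = rng.random((1267, 3))                       # placeholder camera RGB values
cie_xyz = camera_rgb @ np.array([[0.41, 0.36, 0.18],
                                 [0.21, 0.72, 0.07],
                                 [0.02, 0.12, 0.95]]).T  # placeholder "ground truth" XYZ

X_train, X_test, y_train, y_test = train_test_split(
    camera_rgb, cie_xyz, train_size=1000, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(10, 10, 10), max_iter=5000, random_state=0)
model.fit(X_train, y_train)
print("test RMS error:", np.sqrt(np.mean((model.predict(X_test) - y_test) ** 2)))
```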

  12. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy

    NASA Astrophysics Data System (ADS)

    Barabas, Federico M.; Masullo, Luciano A.; Stefani, Fernando D.

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  13. Note: Tormenta: An open source Python-powered control software for camera based optical microscopy.

    PubMed

    Barabas, Federico M; Masullo, Luciano A; Stefani, Fernando D

    2016-12-01

    Until recently, PC control and synchronization of scientific instruments was only possible through closed-source expensive frameworks like National Instruments' LabVIEW. Nowadays, efficient cost-free alternatives are available in the context of a continuously growing community of open-source software developers. Here, we report on Tormenta, a modular open-source software for the control of camera-based optical microscopes. Tormenta is built on Python, works on multiple operating systems, and includes some key features for fluorescence nanoscopy based on single molecule localization.

  14. Electronics for the camera of the First G-APD Cherenkov Telescope (FACT) for ground based gamma-ray astronomy

    NASA Astrophysics Data System (ADS)

    Anderhub, H.; Backes, M.; Biland, A.; Boller, A.; Braun, I.; Bretz, T.; Commichau, V.; Djambazov, L.; Dorner, D.; Farnier, C.; Gendotti, A.; Grimm, O.; von Gunten, H. P.; Hildebrand, D.; Horisberger, U.; Huber, B.; Kim, K.-S.; Köhne, J.-H.; Krähenbühl, T.; Krumm, B.; Lee, M.; Lenain, J.-P.; Lorenz, E.; Lustermann, W.; Lyard, E.; Mannheim, K.; Meharga, M.; Neise, D.; Nessi-Tedaldi, F.; Overkemping, A.-K.; Pauss, F.; Renker, D.; Rhode, W.; Ribordy, M.; Rohlfs, R.; Röser, U.; Stucki, J.-P.; Thaele, J.; Tibolla, O.; Viertel, G.; Vogler, P.; Walter, R.; Warda, K.; Weitzel, Q.

    2012-01-01

    Within the FACT project, we construct a new type of camera based on Geiger-mode avalanche photodiodes (G-APDs). Compared to photomultipliers, G-APDs are more robust, need a lower operation voltage and have the potential of higher photon-detection efficiency and lower cost, but were never fully tested in the harsh environments of Cherenkov telescopes. The FACT camera consists of 1440 G-APD pixels and readout channels, based on the DRS4 (Domino Ring Sampler) analog pipeline chip and commercial Ethernet components. Preamplifiers, trigger system, digitization, slow control and power converters are integrated into the camera.

  15. Positron microanalysis with high intensity beams

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.

    1990-01-01

    One of the more common applications for a high intensity slow positron facility will be microanalysis of solid materials. In the first section of this paper some examples are given of procedures that can be developed. Since most of the attendees of this workshop are experts in positron spectroscopy, comprehensive descriptions will be omitted. With the exception of positron emission microscopy, most of the procedures will be based on those already in common use with broad beams. The utility of the methods has been demonstrated in all cases, but materials scientists use very few of them because positron microbeams are not generally available. A high intensity positron facility will make microbeams easier to obtain and partially alleviate this situation. All microanalysis techniques listed below will have a common requirement, which is the ability to locate the microscopic detail or area of interest and to focus the positron beam exclusively on it. The last section of this paper is a suggestion of how a high intensity positron facility might be designed so as to have this capability built in. The method will involve locating the specimen by scanning it with the microbeam of positrons and inducing a secondary electron image that will immediately reveal whether or not the positron beam is striking the proper portion of the specimen. This 'scanning positron microscope' will be a somewhat prosaic analog of the conventional SEM. It will, however, be an indispensable utility that will enhance the practicality of positron microanalysis techniques. 6 refs., 1 fig.

  16. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.

    PubMed

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-09-18

    A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation between the estimated results and ground truth produced a variation of 3% from the mean.
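    A bare-bones version of the level-extraction step (Sobel gradient followed by a row-wise edge-energy profile) might look like the sketch below; this is an illustrative reconstruction in Python, not the authors' MATLAB pipeline, and the file name is hypothetical.

```python
# Bare-bones water-level extraction: Sobel gradient + strongest horizontal edge row.
# Illustrative reconstruction only, not the MATLAB pipeline used in the paper.
import cv2
import numpy as np

def water_level_row(image_bgr):
    """Return the image row with the strongest horizontal edge (the water line)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical intensity gradient
    profile = np.abs(gy).sum(axis=1)                   # edge energy per image row
    return int(np.argmax(profile))

if __name__ == "__main__":
    frame = cv2.imread("glass.jpg")    # hypothetical frame from the vessel camera
    if frame is not None:
        print("water line at row", water_level_row(frame))
```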

  17. Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass

    PubMed Central

    Ayoola, Idowu; Chen, Wei; Feijs, Loe

    2015-01-01

    A major problem related to chronic health is patients’ “compliance” with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between the derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. The correlation between the estimated results and ground truth produced a variation of 3% from the mean. PMID:26393600

  18. Positron trapping at grain boundaries

    SciTech Connect

    Dupasquier, A. ); Romero, R.; Somoza, A. )

    1993-10-01

    The standard positron trapping model has often been applied, as a simple approximation, to the interpretation of positron lifetime spectra in situations of diffusion-controlled trapping. This paper shows that this approximation is not sufficiently accurate, and presents a model based on the correct solution of the diffusion equation, in the version appropriate for studying positron trapping at grain boundaries. The model is used for the analysis of new experimental data on positron lifetime spectra in a fine-grained Al-Ca-Zn alloy. Previous results on similar systems are also discussed and reinterpreted. The analysis yields effective diffusion coefficients not far from the values known for the base metals of the alloys.

  19. Optimum design of the carbon fiber thin-walled baffle for the space-based camera

    NASA Astrophysics Data System (ADS)

    Yan, Yong; Song, Gu; Yuan, An; Jin, Guang

    2011-08-01

    The design of thin-walled baffles for space-based cameras is an important task in lightweight space camera development, owing to stringent mass requirements and a harsh mechanical environment, especially for baffles made of carbon fiber. This paper describes the design process of a carbon fiber thin-walled baffle, which is instructive for the design of other thin-walled baffles for space cameras. Through finite element analysis of the sensitivity of structural stiffness and strength to the wall parameters, the designers obtained the design margin that the baffle's stiffness and strength can tolerate within the development requirements, and from this derived a suitable optimization criterion for the geometric parameter optimization. This guiding principle is significant for the optimum design of thin-walled baffles of space cameras. Because the stiffness and strength of carbon fiber structures can themselves be tailored, the effect of the optimization is more pronounced when the design parameters are chosen appropriately. Combining the manufacturing process constraints with the design requirements, the structural scheme of the thin-walled baffle was selected and the specific carbon fiber fabrication technology was optimized through FEM-based optimization, effectively reducing processing cost and processing cycle time. Meanwhile, the mass of the thin-walled baffle was reduced significantly while meeting the structural design requirements. Engineering evaluation was carried out, and the results show that the thin-walled baffle satisfies the practical engineering needs of the space-based camera very well: its mass was reduced by about 20%, and the final assessment indices of the thin-walled baffle were significantly better than the overall design requirements. The design

  20. A pixellated γ-camera based on CdTe detectors clinical interests and performances

    NASA Astrophysics Data System (ADS)

    Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.

    2000-07-01

    A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, each 2.83 mm×2.83 mm×2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. However, their use as γ detectors for high-resolution medical imaging requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe because the manufacturer (Eurorad, France) has extensive experience in producing high-grade material with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units, each composed of 256 detectors arranged in 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a special 16-channel integrated circuit designed by CLRC (UK). Detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electrical connections were made by the Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of nearby γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside examinations. There is a growing clinical requirement in nuclear cardiology to early assess the extent of an

  1. Novel fundus camera design

    NASA Astrophysics Data System (ADS)

    Dehoog, Edward A.

    A fundus camera is a complex optical system that makes use of the principle of reflex-free indirect ophthalmoscopy to image the retina. Despite being in existence since the early 1900s, the design of the fundus camera has changed little, and there is minimal information about the design principles utilized. Parameters and specifications involved in the design of a fundus camera are determined and their effect on system performance is discussed. Fundus cameras incorporating different design methods are modeled, and a performance evaluation based on design parameters is used to determine the effectiveness of each design strategy. By determining the design principles involved in the fundus camera, new cameras can be designed to include specific imaging modalities such as optical coherence tomography, imaging spectroscopy and imaging polarimetry to gather additional information about the properties and structure of the retina. Design principles utilized to incorporate such modalities into fundus camera systems are discussed. Design, implementation and testing of a snapshot polarimeter fundus camera are demonstrated.

  2. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  3. A clinical gamma camera-based pinhole collimated system for high resolution small animal SPECT imaging.

    PubMed

    Mejia, J; Galvis-Alonso, O Y; Castro, A A de; Braga, J; Leite, J P; Simões, M V

    2010-12-01

    The main objective of the present study was to upgrade a clinical gamma camera to obtain high resolution tomographic images of small animal organs. The system is based on a clinical gamma camera to which we have adapted a special-purpose pinhole collimator and a device for positioning and rotating the target based on a computer-controlled step motor. We developed a software tool to reconstruct the target's three-dimensional distribution of emission from a set of planar projections, based on the maximum likelihood algorithm. We present details on the hardware and software implementation. We imaged phantoms and heart and kidneys of rats. When using pinhole collimators, the spatial resolution and sensitivity of the imaging system depend on parameters such as the detector-to-collimator and detector-to-target distances and pinhole diameter. In this study, we reached an object voxel size of 0.6 mm and spatial resolution better than 2.4 and 1.7 mm full width at half maximum when 1.5- and 1.0-mm diameter pinholes were used, respectively. Appropriate sensitivity to study the target of interest was attained in both cases. Additionally, we show that as few as 12 projections are sufficient to attain good quality reconstructions, a result that implies a significant reduction of acquisition time and opens the possibility for radiotracer dynamic studies. In conclusion, a high resolution single photon emission computed tomography (SPECT) system was developed using a commercial clinical gamma camera, allowing the acquisition of detailed volumetric images of small animal organs. This type of system has important implications for research areas such as Cardiology, Neurology or Oncology.

  4. A Probabilistic Feature Map-Based Localization System Using a Monocular Camera.

    PubMed

    Kim, Hyungjin; Lee, Donghwa; Oh, Taekjun; Choi, Hyun-Taek; Myung, Hyun

    2015-08-31

    Image-based localization is one of the most widely researched localization techniques in the robotics and computer vision communities. As enormous image data sets are provided through the Internet, many studies on estimating a location with a pre-built image-based 3D map have been conducted. Most research groups use numerous image data sets that contain sufficient features. In contrast, this paper focuses on image-based localization in the case of insufficient images and features. A more accurate localization method is proposed based on a probabilistic map using 3D-to-2D matching correspondences between a map and a query image. The probabilistic feature map is generated in advance by probabilistic modeling of the sensor system as well as the uncertainties of camera poses. Using the conventional PnP algorithm, an initial camera pose is estimated on the probabilistic feature map. The proposed algorithm is optimized from the initial pose by minimizing Mahalanobis distance errors between features from the query image and the map to improve accuracy. To verify that the localization accuracy is improved, the proposed algorithm is compared with the conventional algorithm in simulation and real environments.
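    The initial pose estimation step described above, conventional PnP on 3D-to-2D correspondences, can be sketched with OpenCV as follows; the map points, pixel coordinates and intrinsics are invented placeholders (a planar point set is used for simplicity), and the paper's Mahalanobis-distance refinement on the probabilistic map is not reproduced.

```python
# Initial camera pose from 3D-to-2D correspondences via PnP (OpenCV).
# Placeholder data; the paper's probabilistic-map Mahalanobis refinement is omitted.
import cv2
import numpy as np

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)                 # map features (3D, planar)
image_points = np.array([[320, 240], [420, 238], [424, 342], [318, 344]],
                        dtype=np.float64)                  # matched pixels (2D)
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0, 0, 1]], dtype=np.float64)                # assumed intrinsics

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()    # camera centre in map coordinates
    print("estimated camera position:", camera_position)
```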

  5. New Stereo Vision Digital Camera System for Simultaneous Measurement of Cloud Base Height and Atmospheric Visibility

    NASA Astrophysics Data System (ADS)

    Janeiro, F. M.; Carretas, F.; Palma, N.; Ramos, P. M.; Wagner, F.

    2013-12-01

    Clouds play an important role in many aspects of everyday life. They affect both the local weather and the global climate, and are an important parameter in climate change studies. Cloud parameters are also important for weather prediction models that make use of actual measurements. It is thus important to have low-cost instrumentation that can be deployed in the field to measure those parameters. Such instruments should also be automated and robust, since they may be deployed in remote places and be subject to adverse weather conditions. Although clouds are very important in environmental systems, they are also an essential component of airplane safety when visual flight rules (VFR) are enforced, such as at most small aerodromes where it is not economically viable to install instruments for assisted flying. Under VFR there are strict limits on cloud base height, cloud cover and atmospheric visibility that ensure the safety of pilots and planes. Although instruments to measure those parameters are available on the market, their relatively high cost makes them unavailable at many local aerodromes. In this work we present a new prototype which has recently been developed and deployed at a local aerodrome as a proof of concept. It is composed of two digital cameras that capture photographs of the sky and allow the measurement of cloud height from the parallax effect. The new development is a new geometry which allows the simultaneous measurement of cloud base height, wind speed at cloud base height and atmospheric visibility, which was not previously possible with only two cameras. The new orientation of the cameras comes at the cost of a more complex geometry for measuring the cloud base height. The atmospheric visibility is calculated from the Lambert-Beer law after measuring the contrast between a set of dark objects and the background sky. The prototype includes the latest hardware developments that
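    The underlying measurement rests on the standard stereo parallax relation: for two cameras on a horizontal baseline pointing at the zenith, the cloud base height follows from the pixel disparity of matched cloud features. The sketch below shows only that basic relation with assumed camera parameters; it does not reflect the prototype's more complex, tilted geometry.

```python
# Cloud base height from stereo parallax (standard relation, with assumed numbers;
# not the prototype's actual geometry, which uses a more complex camera orientation).
def cloud_base_height_m(baseline_m, focal_mm, pixel_pitch_um, disparity_px):
    """Height H = B * f / d for zenith-pointing cameras on a horizontal baseline."""
    focal_px = focal_mm * 1e3 / pixel_pitch_um      # focal length in pixels
    return baseline_m * focal_px / disparity_px

# Example: 100 m baseline, 8 mm lens, 3.75 um pixels, 107-pixel disparity (~2 km cloud).
print(round(cloud_base_height_m(100.0, 8.0, 3.75, 107.0)), "m")
```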

  6. CCD-camera-based diffuse optical tomography to study ischemic stroke in preclinical rat models

    NASA Astrophysics Data System (ADS)

    Lin, Zi-Jing; Niu, Haijing; Liu, Yueming; Su, Jianzhong; Liu, Hanli

    2011-02-01

    Stroke, due to ischemia or hemorrhage, is a neurological deficit of the cerebral vasculature and is the third leading cause of death in the United States. More than 80 percent of strokes are ischemic, caused by blockage of an artery in the brain by thrombosis or arterial embolism. Hence, the development of an imaging technique to image or monitor cerebral ischemia and the effect of anti-stroke therapy is essential. Near-infrared (NIR) optical tomography has great potential as a non-invasive imaging tool (owing to its low cost and portability) for imaging embedded abnormal tissue, such as a dysfunctional area caused by ischemia. Moreover, NIR tomographic techniques have been successfully demonstrated in studies of cerebrovascular hemodynamics and brain injury. Compared to a fiber-based diffuse optical tomographic system, a CCD-camera-based system is more suitable for pre-clinical animal studies due to its simpler setup and lower cost. In this study, we have utilized the CCD-camera-based technique to image embedded inclusions from tissue-phantom experimental data. We are then able to obtain good reconstructed images with two recently developed algorithms: (1) a depth compensation algorithm (DCA) and (2) a globally convergent method (GCM). We will demonstrate volumetric tomographic reconstruction results from the tissue phantom; the approach has great potential to determine and monitor the effect of anti-stroke therapies.

  7. Development of NEMA-based software for gamma camera quality control.

    PubMed

    Rova, Andrew; Celler, Anna; Hamarneh, Ghassan

    2008-06-01

    We have developed a cross-platform software application that implements all of the basic standardized nuclear medicine scintillation camera quality control analyses, thus serving as an independent complement to camera manufacturers' software. Our application allows direct comparison of data and statistics from different cameras through its ability to uniformly analyze a range of file types. The program has been tested using multiple gamma cameras, and its results agree with comparable analysis by the manufacturers' software.

  8. Enhancing spatial resolution of (18)F positron imaging with the Timepix detector by classification of primary fired pixels using support vector machine.

    PubMed

    Wang, Qian; Liu, Zhen; Ziegler, Sibylle I; Shi, Kuangyu

    2015-07-07

    Position-sensitive positron cameras using silicon pixel detectors have been applied for some preclinical and intraoperative clinical applications. However, the spatial resolution of a positron camera is limited by positron multiple scattering in the detector. An incident positron may fire a number of successive pixels on the imaging plane. It is still impossible to capture the primary fired pixel along a particle trajectory by hardware or to perceive the pixel firing sequence by direct observation. Here, we propose a novel data-driven method to improve the spatial resolution by classifying the primary pixels within the detector using a support vector machine. A classification model is constructed by learning the features of positron trajectories based on Monte-Carlo simulations using Geant4. Topological and energy features of pixels fired by (18)F positrons were considered for the training and classification. After applying the classification model on measurements, the primary fired pixels of the positron tracks in the silicon detector were estimated. The method was tested and assessed for [(18)F]FDG imaging of an absorbing edge protocol and a leaf sample. The proposed method improved the spatial resolution from 154.6 ± 4.2 µm (energy-weighted centroid approximation) to 132.3 ± 3.5 µm in the absorbing edge measurements. For the positron imaging of a leaf sample, the proposed method achieved lower root mean square error relative to phosphor plate imaging, and higher similarity with the reference optical image. The improvements of the preliminary results support further investigation of the proposed algorithm for the enhancement of positron imaging in clinical and preclinical applications.
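    The classification stage described above amounts to a supervised binary classifier over per-pixel features; a generic scikit-learn sketch of such a stage is shown below, with synthetic feature vectors standing in for the Geant4-derived topological and energy features used in the paper.

```python
# Generic SVM stage for tagging the "primary fired pixel" of a positron track.
# Feature vectors here are synthetic stand-ins for the Geant4-derived topological
# and energy features used in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
n = 2000
# Synthetic features per fired pixel: e.g. [deposited energy, distance to cluster end,
# number of neighbouring fired pixels]
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # 1 = primary

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:1500], y[:1500])
print("held-out accuracy:", clf.score(X[1500:], y[1500:]))
```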

  9. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging

    PubMed Central

    Popovic, Kosta; McKisson, Jack E.; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G.; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B.

    2017-01-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively. PMID:28286345

  10. Development and characterization of a round hand-held silicon photomultiplier based gamma camera for intraoperative imaging.

    PubMed

    Popovic, Kosta; McKisson, Jack E; Kross, Brian; Lee, Seungjoon; McKisson, John; Weisenberger, Andrew G; Proffitt, James; Stolin, Alexander; Majewski, Stan; Williams, Mark B

    2014-05-01

    This paper describes the development of a hand-held gamma camera for intraoperative surgical guidance that is based on silicon photomultiplier (SiPM) technology. The camera incorporates a cerium doped lanthanum bromide (LaBr3:Ce) plate scintillator, an array of 80 SiPM photodetectors and a two-layer parallel-hole collimator. The field of view is circular with a 60 mm diameter. The disk-shaped camera housing is 75 mm in diameter, approximately 40.5 mm thick and has a mass of only 1.4 kg, permitting either hand-held or arm-mounted use. All camera components are integrated on a mobile cart that allows easy transport. The camera was developed for use in surgical procedures including determination of the location and extent of primary carcinomas, detection of secondary lesions and sentinel lymph node biopsy (SLNB). Here we describe the camera design and its principal operating characteristics, including spatial resolution, energy resolution, sensitivity uniformity, and geometric linearity. The gamma camera has an intrinsic spatial resolution of 4.2 mm FWHM, an energy resolution of 21.1 % FWHM at 140 keV, and a sensitivity of 481 and 73 cps/MBq when using the single- and double-layer collimators, respectively.

  11. Image mosaic based on the camera self-calibration of combining two vanishing points and pure rotational motion

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Wang, Junqiao; Liang, Erjun; Liu, Xiaomin; Zhao, Shujun

    2016-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depth of field, wide fields of view, and scene changes, as well as in other situations such as zooming. In this paper, two self-calibration methods, based respectively on two vanishing points and on homography, are studied, and an image mosaic is finally realized based on self-calibration of a camera rotating purely around its optical center. The geometric characteristics of the vanishing points formed by two groups of orthogonal parallel lines are exploited in the self-calibration based on two vanishing points: using the orthogonality of the vectors connecting the optical center and the vanishing points, constraint equations on the camera intrinsic parameters are established. With this method, four internal parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with self-calibration based on homography, the method based on two vanishing points has a more convenient calibration process and a simpler algorithm. To check the quality of the self-calibration, we create a spherical mosaic of the images that were used for the homography-based self-calibration. Comparison with the experimental results of two methods based respectively on a calibration plate and on self-calibration with the machine vision software Halcon verifies the practicability and effectiveness of the self-calibration methods based on two vanishing points and on homography.
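    Under the common simplifying assumptions of zero skew, unit aspect ratio and a principal point at the image centre, two orthogonal vanishing points already constrain the focal length. The sketch below illustrates only this single constraint with synthetic values; the paper itself solves for four intrinsic parameters from four views.

```python
# Focal length from two orthogonal vanishing points, assuming zero skew, unit aspect
# ratio and principal point at the image centre. Illustration of one constraint only;
# the paper solves for four intrinsic parameters from four images.
import numpy as np

def focal_from_vanishing_points(v1, v2, principal_point):
    """v1, v2: pixel coords of vanishing points of two orthogonal direction bundles."""
    c = np.asarray(principal_point, dtype=float)
    d1 = np.asarray(v1, dtype=float) - c
    d2 = np.asarray(v2, dtype=float) - c
    f2 = -np.dot(d1, d2)          # orthogonality: (v1 - c).(v2 - c) + f^2 = 0
    if f2 <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return np.sqrt(f2)

# Synthetic check: these vanishing points were generated with f = 1000 px and
# principal point (640, 360), so the function should recover f close to 1000.
print(focal_from_vanishing_points((1217.4, 360.0), (-1092.1, 360.0), (640.0, 360.0)))
```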

  12. Real-time implementation of camera positioning algorithm based on FPGA & SOPC

    NASA Astrophysics Data System (ADS)

    Yang, Mingcao; Qiu, Yuehong

    2014-09-01

    In recent years, with the development of positioning algorithms and FPGAs, real-time, fast and accurate camera positioning implemented on an FPGA has become feasible. Based on an in-depth study of embedded hardware and dual-camera positioning systems, this thesis sets up an infrared optical positioning system based on an FPGA and an SOPC system, which enables real-time positioning of marker points in space. The completed work includes: (1) a CMOS sensor is used to extract the pixels of the three target objects, implemented through an FPGA hardware driver; visible-light LEDs are used here as the target points of the instrument. (2) Prior to extraction of the feature point coordinates, the image is filtered (here with a median filter) to avoid platform effects degrading the physical properties of the system. (3) Marker point coordinates are extracted by the FPGA hardware circuit; a new iterative threshold selection method is used for image segmentation. The segmented binary image is then labeled, and the coordinates of the feature points of the needle are calculated by the center-of-gravity method. (4) The direct linear transformation (DLT) and epipolar-constraint methods are applied to the three-dimensional reconstruction of space coordinates with the planar-array CMOS system. An SOPC system-on-a-chip is used here, taking advantage of the dual-core computing system to run matching and coordinate operations separately, thus increasing processing speed.

  13. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Based on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can be different, which affects the performance of the gaze tracking system. Nevertheless, to our best knowledge, most previous researches implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not provide this in public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information for implementing gaze tracking system. We address this problem providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of user’s head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system shows high performance in terms of accuracy, user convenience and interest. PMID:27589768

  14. Modelling Positron Interactions with Matter

    NASA Astrophysics Data System (ADS)

    Garcia, G.; Petrovic, Z.; White, R.; Buckman, S.

    2011-05-01

    In this work we link fundamental measurements of positron interactions with biomolecules, with the development of computer codes for positron transport and track structure calculations. We model positron transport in a medium from a knowledge of the fundamental scattering cross section for the atoms and molecules comprising the medium, combined with a transport analysis based on statistical mechanics and Monte-Carlo techniques. The accurate knowledge of the scattering is most important at low energies, a few tens of electron volts or less. The ultimate goal of this work is to do this in soft condensed matter, with a view to ultimately developing a dosimetry model for Positron Emission Tomography (PET). The high-energy positrons first emitted by a radionuclide in PET may well be described by standard formulas for energy loss of charged particles in matter, but it is incorrect to extrapolate these formulas to low energies. Likewise, using electron cross-sections to model positron transport at these low energies has been shown to be in serious error due to the effects of positronium formation. Work was supported by the Australian Research Council, the Serbian Government, and the Ministerio de Ciencia e Innovación, Spain.
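
    The transport idea described above, sampling free flight lengths from the total cross section and picking the interaction channel by relative cross sections, can be written compactly; the cross-section numbers and channel names below are placeholders for illustration, not data from this work.

        import numpy as np

        rng = np.random.default_rng(0)

        def transport_step(density_m3, channels):
            # channels: dict of interaction name -> cross section (m^2) at the
            # positron's current energy (in practice taken from measured tables)
            sigma_total = sum(channels.values())
            mean_free_path = 1.0 / (density_m3 * sigma_total)
            path = rng.exponential(mean_free_path)        # free flight length
            names, sigmas = zip(*channels.items())
            interaction = rng.choice(names, p=np.array(sigmas) / sigma_total)
            return path, interaction

        # illustrative cross sections only
        print(transport_step(3.3e28, {"elastic": 1e-20,
                                      "positronium_formation": 4e-21,
                                      "ionisation": 1e-21}))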

  15. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments.

  16. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment

    PubMed Central

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-01-01

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for automatic, accurate UAV landing in Global Positioning System (GPS)-denied environments. PMID:27589755

  17. Radiation defects induced by helium implantation in gold-based alloys investigated by positron annihilation spectroscopy

    NASA Astrophysics Data System (ADS)

    Thome, T.; Grynszpan, R. I.

    2006-06-01

    The formation of gas bubbles in metallic materials may result in drastic degradation of in-service properties. In order to investigate this effect in high density and medium-low melting temperature (T_M) alloys, positron annihilation spectroscopy measurements were performed on helium-implanted gold-silver solid solutions after isochronal annealing treatments. Three recovery stages are observed, attributed to the migration and elimination of defects not stabilized by helium atoms, helium bubble nucleation and bubble growth. Similarities with other metals are found for the recovery stages involving bubble nucleation and growth processes. Lifetime measurements indicate that He implantation leads to the formation of small and over-pressurized bubbles that generate internal stresses in the material. A comprehensive picture is drawn for possible mechanisms of helium bubble evolution. Two values of activation energy (0.26 and 0.53 eV) are determined below and above 0.7 T_M, respectively, from the variation of the helium bubble radius during the bubble growth stage. The migration and coalescence mechanism, which accounts for these very low activation energies, controls the helium bubble growth.

  18. Fluorodeoxyglucose-based positron emission tomography imaging to monitor drug responses in hematological tumors.

    PubMed

    Newbold, Andrea; Martin, Ben P; Cullinane, Carleen; Bots, Michael

    2014-10-01

    Positron emission tomography (PET) can be used to monitor the uptake of the labeled glucose analog fluorodeoxyglucose (¹⁸F-FDG), a process that is generally believed to reflect viable tumor cell mass. The use of ¹⁸F-FDG PET can be helpful in documenting over time the reduction in tumor mass volume in response to anticancer drug therapy in vivo. In this protocol, we describe how to monitor the response of murine B-cell lymphomas to an inducer of apoptosis, the anticancer drug vorinostat (a histone deacetylase inhibitor). B-cell lymphoma cells are injected into recipient mice and, on tumor formation, the mice are treated with vorinostat. The tracer ¹⁸F-FDG is then injected into the mice at several time points, and its uptake is monitored using PET. Because the uptake of ¹⁸F-FDG is not a direct measure of apoptosis, an additional direct method proving that apoptotic cells are present should also be performed.

  19. Fluorodeoxyglucose-based positron emission tomography imaging to monitor drug responses in solid tumors.

    PubMed

    Newbold, Andrea; Martin, Ben P; Cullinane, Carleen; Bots, Michael

    2014-10-01

    Positron emission tomography (PET) is used to monitor the uptake of the labeled glucose analogue fluorodeoxyglucose (¹⁸F-FDG) by solid tumor cells, a process generally believed to reflect viable tumor cell mass. The use of ¹⁸F-FDG exploits the high demand for glucose in tumor cells, and serves to document over time the response of a solid tumor to an inducer of apoptosis. The apoptosis inducer crizotinib is a small-molecule inhibitor of c-Met, a receptor tyrosine kinase that is often dysregulated in human tumors. In this protocol, we describe how to monitor the response of a solid tumor to crizotinib. Human gastric tumor cells (GTL-16 cells) are injected into recipient mice and, on tumor formation, the mice are treated with crizotinib. The tracer ¹⁸F-FDG is then injected into the mice at several time points, and its uptake is monitored using PET. Because ¹⁸F-FDG uptake varies widely among different tumor models, preliminary experiments should be performed with each new model to determine its basal level of ¹⁸F-FDG uptake. Verifying that the basal level of uptake is sufficiently above background levels will assure accurate quantitation. Because ¹⁸F-FDG uptake is not a direct measure of apoptosis, it is advisable to carry out an additional direct method to show the presence of apoptotic cells.

  20. Fast time-of-flight camera based surface registration for radiotherapy patient positioning

    SciTech Connect

    Placht, Simon; Stancanello, Joseph; Schaller, Christian; Balda, Michael; Angelopoulou, Elli

    2012-01-15

    Purpose: This work introduces a rigid registration framework for patient positioning in radiotherapy, based on real-time surface acquisition by a time-of-flight (ToF) camera. Dynamic properties of the system are also investigated for future gating/tracking strategies. Methods: A novel preregistration algorithm, based on translation and rotation-invariant features representing surface structures, was developed. Using these features, corresponding three-dimensional points were computed in order to determine initial registration parameters. These parameters became a robust input to an accelerated version of the iterative closest point (ICP) algorithm for the fine-tuning of the registration result. Distance calibration and Kalman filtering were used to compensate for ToF-camera dependent noise. Additionally, the advantage of using the feature based preregistration over an "ICP only" strategy was evaluated, as well as the robustness of the rigid-transformation-based method to deformation. Results: The proposed surface registration method was validated using phantom data. A mean target registration error (TRE) for translations and rotations of 1.62 ± 1.08 mm and 0.07° ± 0.05°, respectively, was achieved. There was a temporal delay of about 65 ms in the registration output, which can be seen as negligible considering the dynamics of biological systems. Feature based preregistration allowed for accurate and robust registrations even at very large initial displacements. Deformations affected the accuracy of the results, necessitating particular care in cases of deformed surfaces. Conclusions: The proposed solution is able to solve surface registration problems with an accuracy suitable for radiotherapy cases where external surfaces offer primary or complementary information to patient positioning. The system shows promising dynamic properties for its use in gating/tracking applications. The overall system is competitive with commonly-used surface
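
    For readers unfamiliar with the fine-tuning stage, a generic point-to-point ICP iteration (nearest neighbours plus an SVD-based rigid fit) is sketched below; it is not the paper's accelerated variant and omits the feature-based preregistration, distance calibration and Kalman filtering.

        import numpy as np
        from scipy.spatial import cKDTree

        def icp(source, target, iters=30):
            # source, target: (N,3) and (M,3) point clouds; returns R, t that
            # rigidly align source to target by iterating closest-point matches
            R, t = np.eye(3), np.zeros(3)
            tree = cKDTree(target)
            src = source.copy()
            for _ in range(iters):
                _, idx = tree.query(src)              # closest-point correspondences
                corr = target[idx]
                mu_s, mu_c = src.mean(axis=0), corr.mean(axis=0)
                H = (src - mu_s).T @ (corr - mu_c)    # cross-covariance matrix
                U, _, Vt = np.linalg.svd(H)
                d = np.sign(np.linalg.det(Vt.T @ U.T))
                R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
                t_step = mu_c - R_step @ mu_s
                src = src @ R_step.T + t_step
                R, t = R_step @ R, R_step @ t + t_step
            return R, t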

  1. Early sinkhole detection using a drone-based thermal camera and image processing

    NASA Astrophysics Data System (ADS)

    Lee, Eun Ju; Shin, Sang Young; Ko, Byoung Chul; Chang, Chunho

    2016-09-01

    Accurate advance detection of the sinkholes that are now occurring more frequently is an important way of preventing human fatalities and property damage. Unlike naturally occurring sinkholes, human-induced ones in urban areas are typically due to groundwater disturbances and leaks of water and sewage caused by large-scale construction. Although many sinkhole detection methods have been developed, it is still difficult to predict sinkholes that form at depth. In addition, conventional methods are inappropriate for scanning a large area because of their high cost. Therefore, this paper uses a drone combined with a thermal far-infrared (FIR) camera to detect potential sinkholes over a large area based on computer vision and pattern classification techniques. To make a standard dataset, we dug eight holes of depths 0.5-2 m in increments of 0.5 m and with a maximum width of 1 m. We filmed these using the drone-based FIR camera at a height of 50 m. We first detect candidate regions by analysing cold spots in the thermal images, based on the fact that a sinkhole typically has a lower thermal energy than its background. Then, these regions are classified into sinkhole and non-sinkhole classes using a pattern classifier. In this study, we ensemble the classification results of a lightweight convolutional neural network (CNN) with those of a Boosted Random Forest (BRF) trained on handcrafted features. We apply the proposed ensemble method successfully to sinkhole data of various sizes and depths in different environments, and show that the ensemble of the CNN and the BRF with handcrafted features detects sinkholes better than other classifiers or a standalone CNN.
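
    The candidate-region stage can be approximated by a simple statistical cold-spot threshold followed by connected-component analysis; the threshold factor and minimum blob size below are assumed values, and the CNN/BRF ensemble that does the actual classification is not reproduced here.

        import numpy as np
        from scipy import ndimage

        def cold_spot_candidates(thermal, k=2.0, min_pixels=20):
            # pixels colder than mean - k*std are potential sinkhole regions
            mask = thermal < thermal.mean() - k * thermal.std()
            labels, _ = ndimage.label(mask)
            boxes = []
            for i, sl in enumerate(ndimage.find_objects(labels), start=1):
                if (labels[sl] == i).sum() >= min_pixels:
                    boxes.append(sl)        # slice pair = candidate bounding box
            return boxes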

  2. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results.

    PubMed

    Benetazzo, Flavia; Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-09-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether a change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors.
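
    A minimal way to turn the chest-distance signal into a respiratory rate is to take the dominant frequency of the mean chest-region depth over time; the breathing band and the assumption that the chest region has already been segmented are simplifications for illustration, not the paper's exact processing chain.

        import numpy as np

        def respiratory_rate_bpm(depth_series, fs):
            # depth_series: mean depth of the chest region per frame; fs: frame rate (Hz)
            x = np.asarray(depth_series, float)
            x = x - x.mean()                           # remove the static distance
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            band = (freqs >= 0.1) & (freqs <= 1.0)     # 6-60 breaths per minute
            return 60.0 * freqs[band][np.argmax(spectrum[band])]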

  3. Respiratory rate detection algorithm based on RGB-D camera: theoretical background and experimental results

    PubMed Central

    Freddi, Alessandro; Monteriù, Andrea; Longhi, Sauro

    2014-01-01

    Both the theoretical background and the experimental results of an algorithm developed to perform human respiratory rate measurements without any physical contact are presented. Based on depth image sensing techniques, the respiratory rate is derived by measuring morphological changes of the chest wall. The algorithm identifies the human chest, computes its distance from the camera and compares this value with the instantaneous distance, discerning whether a change is due to the respiratory act or to a limited movement of the person being monitored. To experimentally validate the proposed algorithm, the respiratory rate measurements coming from a spirometer were taken as a benchmark and compared with those estimated by the algorithm. Five tests were performed, with five different persons seated in front of the camera. The first test aimed to choose a suitable sampling frequency. The second test was conducted to compare the performance of the proposed system with respect to the gold standard in ideal conditions of light, orientation and clothing. The third, fourth and fifth tests evaluated the algorithm performance under different operating conditions. The experimental results showed that the system can correctly measure the respiratory rate, and it is a viable alternative to monitor the respiratory activity of a person without using invasive sensors. PMID:26609383

  4. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera.

    PubMed

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-06-22

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches.
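
    The weighted, outlier-penalized objective described above can be sketched generically as a robust sum of point-to-line distances for projected LiDAR features; the Huber penalty and the per-correspondence weights below are stand-ins for the paper's actual weighting scheme and penalizing function.

        import numpy as np

        def huber(r, delta=1.0):
            # quadratic for small residuals, linear for outliers
            a = np.abs(r)
            return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

        def calibration_cost(points_img, lines, weights, delta=1.0):
            # points_img: (N,2) LiDAR features projected with the current extrinsics;
            # lines: (N,3) image lines (a, b, c) normalized so a^2 + b^2 = 1
            x, y = points_img[:, 0], points_img[:, 1]
            a, b, c = lines[:, 0], lines[:, 1], lines[:, 2]
            d = a * x + b * y + c                      # signed point-to-line distance
            return np.sum(weights * huber(d, delta))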

  5. Retinal oximetry based on nonsimultaneous image acquisition using a conventional fundus camera.

    PubMed

    Kim, Sun Kwon; Kim, Dong Myung; Suh, Min Hee; Kim, Martha; Kim, Hee Chan

    2011-08-01

    To measure the retinal arteriole and venule oxygen saturation (SO2) using a conventional fundus camera, retinal oximetry based on nonsimultaneous image acquisition was developed and evaluated. Two retinal images were sequentially acquired using a conventional fundus camera with two bandpass filters (568 nm: isosbestic, 600 nm: non-isosbestic wavelength), one after another, instead of a built-in green filter. The images were registered to compensate for the differences caused by eye movements during the image acquisition. Retinal SO2 was measured using two-wavelength oximetry. To evaluate the sensitivity of the proposed method, SO2 in the arterioles and venules before and after inhalation of 100% O2 was compared in 11 healthy subjects. After inhalation of 100% O2, SO2 increased from 96.0% ± 6.0% to 98.8% ± 7.1% in the arterioles (p=0.002) and from 54.0% ± 8.0% to 66.7% ± 7.2% in the venules (p=0.005) (paired t-test, n=11). Reproducibility of the method was 2.6% and 5.2% in the arterioles and venules, respectively (average standard deviation of five measurements, n=11).
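
    Two-wavelength oximetry estimates SO2 from the ratio of vessel optical densities at the oxygen-sensitive and isosbestic wavelengths; the sketch below uses hypothetical calibration constants, which in practice must be fitted empirically and are not values reported in the paper.

        import numpy as np

        def optical_density(i_vessel, i_background):
            # OD of a vessel segment relative to the neighbouring retina
            return np.log10(i_background / i_vessel)

        def so2_two_wavelength(od_600, od_568, a=1.17, b=-1.34):
            # SO2 modelled as a linear function of the optical density ratio
            # ODR = OD(600 nm, oxygen sensitive) / OD(568 nm, isosbestic);
            # a and b are placeholder calibration constants
            return a + b * (od_600 / od_568)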

  6. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
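
    Each elementary motion detector is essentially a Reichardt correlator: one photoreceptor signal is low-pass filtered (delayed) and multiplied with the undelayed signal of its neighbour, and the two mirror-symmetric products are subtracted. A minimal discrete-time sketch follows; the time constant and sampling rate are illustrative, not the reconfigurable values of the FPGA module.

        import numpy as np

        def reichardt_emd(r1, r2, fs, tau=0.035):
            # r1, r2: luminance signals of two neighbouring receptors; fs: sample rate (Hz)
            r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
            alpha = (1.0 / fs) / (tau + 1.0 / fs)      # first-order low-pass coefficient
            lp1, lp2 = np.zeros(len(r1)), np.zeros(len(r2))
            for t in range(1, len(r1)):
                lp1[t] = lp1[t - 1] + alpha * (r1[t] - lp1[t - 1])
                lp2[t] = lp2[t - 1] + alpha * (r2[t] - lp2[t - 1])
            # delayed arm of one receptor correlated with the direct arm of the other
            return lp1 * r2 - lp2 * r1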

  7. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has a built-in Global Positioning System (GPS) receiver, which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
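
    The geolocation step itself reduces to projecting the rangefinder distance along the compass bearing from the observer's GPS position; a standard spherical destination-point computation is sketched below (flat terrain assumed, function names illustrative).

        import math

        EARTH_RADIUS_M = 6371000.0

        def geolocate(lat_deg, lon_deg, bearing_deg, range_m):
            # observer position + true bearing + laser range -> target position
            lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
            brg, d = math.radians(bearing_deg), range_m / EARTH_RADIUS_M
            lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                             math.cos(lat1) * math.sin(d) * math.cos(brg))
            lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                                     math.cos(d) - math.sin(lat1) * math.sin(lat2))
            return math.degrees(lat2), math.degrees(lon2)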

  8. Indirect Correspondence-Based Robust Extrinsic Calibration of LiDAR and Camera

    PubMed Central

    Sim, Sungdae; Sock, Juil; Kwak, Kiho

    2016-01-01

    LiDAR and cameras have been broadly utilized in computer vision and autonomous vehicle applications. However, in order to convert data between the local coordinate systems, we must estimate the rigid body transformation between the sensors. In this paper, we propose a robust extrinsic calibration algorithm that can be implemented easily and has small calibration error. The extrinsic calibration parameters are estimated by minimizing the distance between corresponding features projected onto the image plane. The features are edge and centerline features on a v-shaped calibration target. The proposed algorithm contributes in two ways to improving the calibration accuracy. First, we apply different weights to the distance between a point and a line feature according to the correspondence accuracy of the features. Second, we apply a penalizing function to exclude the influence of outliers in the calibration datasets. Additionally, based on our robust calibration approach for a single LiDAR-camera pair, we introduce a joint calibration that estimates the extrinsic parameters of multiple sensors at once by minimizing one objective function with loop closing constraints. We conduct several experiments to evaluate the performance of our extrinsic calibration algorithm. The experimental results show that our calibration method has better performance than the other approaches. PMID:27338416

  9. Ground-based analysis of volcanic ash plumes using a new multispectral thermal infrared camera approach

    NASA Astrophysics Data System (ADS)

    Williams, D.; Ramsey, M. S.

    2015-12-01

    Volcanic plumes are complex mixtures of mineral, lithic and glass fragments of varying size, together with multiple gas species. These plumes vary in size depending on a number of factors, including vent diameter, magma composition and the quantity of volatiles within a melt. However, determining the chemical and mineralogical properties of a volcanic plume immediately after an eruption is a great challenge. Thermal infrared (TIR) satellite remote sensing of these plumes is routinely used to calculate volcanic ash particle size variations and sulfur dioxide concentration. These analyses are commonly performed using high temporal, low spatial resolution satellites, which can only reveal large-scale trends. What is lacking is a high spatial resolution study specifically of the properties of the proximal plumes. Using the emissive properties of volcanic ash, a new method has been developed to determine the plume's particle size and petrology in spaceborne and ground-based TIR data. A multispectral adaptation of a FLIR TIR camera has been developed that simulates the TIR channels found on several current orbital instruments. Using this instrument, data of volcanic plumes from Fuego and Santiaguito volcanoes in Guatemala were recently obtained. Preliminary results indicate that the camera is capable of detecting silicate absorption features in the emissivity spectra over the TIR wavelength range, which can be linked to both mineral chemistry and particle size. It is hoped that this technique can be expanded to isolate different volcanic species within a plume, to validate the orbital data, and ultimately to use the results to better inform eruption dynamics modelling.

  10. Calibration and disparity maps for a depth camera based on a four-lens device

    NASA Astrophysics Data System (ADS)

    Riou, Cécile; Colicchio, Bruno; Lauffenburger, Jean Philippe; Haeberlé, Olivier; Cudel, Christophe

    2015-11-01

    We propose a model of depth camera based on a four-lens device. This device is used to validate alternate approaches for calibrating multiview cameras and for computing disparity or depth images. The calibration method arises from previous works in which the principles of variable homography were extended for three-dimensional (3-D) measurement. Here, calibration is performed between two contiguous views obtained on the same image sensor, which leads us to propose a new approach that simplifies calibration by using the properties of the variable homography. The second part addresses new principles for obtaining disparity images without any matching: a fast method based on a contour propagation algorithm is proposed that requires no structured or random pattern projection. These principles are proposed in a framework of quality control by vision, for inspection in natural illumination. By preserving scene photometry, other standard controls, such as caliper measurements, shape recognition, or barcode reading, can be performed conjointly with 3-D measurements. The approaches presented here are evaluated. First, we show that rapid calibration is relevant for devices mounted with multiple lenses. Second, synthetic and real experiments validate our method for computing depth images.
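
    Once the calibration provides the focal length and the baseline between contiguous views, disparity converts to depth through the usual stereo relation Z = f·B/d; a generic sketch is given below, independent of the paper's variable-homography formulation.

        import numpy as np

        def disparity_to_depth(disparity_px, focal_px, baseline_mm):
            # Z = f * B / d; invalid (zero) disparities are mapped to infinity
            d = np.asarray(disparity_px, float)
            depth = np.full_like(d, np.inf)
            valid = d > 0
            depth[valid] = focal_px * baseline_mm / d[valid]
            return depth            # same units as the baseline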

  11. Pedestrian mobile mapping system for indoor environments based on MEMS IMU and range camera

    NASA Astrophysics Data System (ADS)

    Haala, N.; Fritsch, D.; Peter, M.; Khosravani, A. M.

    2011-12-01

    This paper describes an approach for the modeling of building interiors based on a mobile device, which integrates modules for pedestrian navigation and low-cost 3D data collection. Personal navigation is realized by a foot-mounted low-cost MEMS IMU, while 3D data capture for subsequent indoor modeling uses a low-cost range camera originally developed for gaming applications. Both steps, navigation and modeling, are supported by additional information provided by the automatic interpretation of evacuation plans. Such emergency plans are compulsory for public buildings in a number of countries. They consist of an approximate floor plan, the current position and escape routes. Additionally, semantic information like stairs, elevators or the floor number is available. After the user has captured an image of such a floor plan, this information is made explicit again by an automatic raster-to-vector conversion. The resulting coarse indoor model then provides constraints at stairs or building walls, which restrict the potential movement of the user. This information is then used to support pedestrian navigation by eliminating drift effects of the low-cost sensor system. The approximate indoor building model additionally provides a priori information during subsequent indoor modeling. Within this process, the low-cost range camera Kinect is used for the collection of multiple 3D point clouds, which are aligned by a suitable matching step and then further analyzed to refine the coarse building model.

  12. The new SCOS-based EGSE of the EPIC flight-spare on-ground cameras

    NASA Astrophysics Data System (ADS)

    La Palombara, N.; Abbey, A.; Insinga, F.; Calderon-Riano, P.; Martin, J.; Palazzo, M.; Poletti, M.; Sembay, S.; Vallejo, J.

    2014-07-01

    After almost 15 years since its launch, the instruments on board the XMM-Newton observatory continue to operate smoothly. However, since the mission was originally planned for 10 years, progressive ageing and/or failures of the on-board instruments can be expected. Dealing with them could require substantial changes in the operating software and the command & telemetry database, which must be tested with the on-ground flight-spare cameras. To this aim, the original Electrical Ground Support Equipment (EGSE) has been replaced with a new one based on SCOS2000, the same tool used by ESA for controlling the spacecraft. This was a demanding task, since it required both the recovery of specialised knowledge regarding the original EGSE and the adaptation of SCOS to a special use. Very recently this work was completed by fully replacing the EGSE of one of the two cameras, which is now ready to be used by ESA. Here we describe the scope and purpose of this activity, the problems faced during its execution and the adopted solutions, and the tests performed to demonstrate the effectiveness of the new EGSE.

  13. Traffic camera system development

    NASA Astrophysics Data System (ADS)

    Hori, Toshi

    1997-04-01

    The intelligent transportation system has generated a strong need for the development of intelligent camera systems to meet the requirements of sophisticated applications, such as electronic toll collection (ETC), traffic violation detection and automatic parking lot control. In order to achieve the highest levels of accuracy in detection, these cameras must have high-speed electronic shutters, high resolution, high frame rate, and communication capabilities. A progressive scan interline transfer CCD camera, with its high-speed electronic shutter and resolution capabilities, provides the basic functions to meet the requirements of a traffic camera system. Unlike most industrial video imaging applications, traffic cameras must deal with harsh environmental conditions and an extremely wide range of lighting. Optical character recognition is a critical function of a modern traffic camera system, with detection and accuracy heavily dependent on the camera function. In order to operate under demanding conditions, communication and functional optimization are implemented to control cameras from a roadside computer. The camera operates with a shutter speed faster than 1/2000 sec to capture highway traffic both day and night. Consequently, camera gain, pedestal level, shutter speed and gamma functions are controlled by a look-up table containing various parameters based on environmental conditions, particularly lighting. Lighting conditions are studied carefully to focus only on the critical license plate surface. A unique light sensor permits accurate reading under a variety of conditions, such as a sunny day, evening, twilight, storms, etc. These camera systems are being deployed successfully in major ETC projects throughout the world.

  14. A non-linear camera calibration with modified teaching-learning-based optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Buyang; Yang, Hua; Yang, Shuo

    2015-12-01

    In this paper, we put forward a novel approach based on a hierarchical teaching-and-learning-based optimization (HTLBO) algorithm for nonlinear camera calibration. This algorithm simulates the teaching and learning interaction of teachers and learners in a classroom. Unlike traditional calibration approaches, the proposed technique can find a near-optimal solution without the need for an accurate initial parameter estimate (only very loose parameter bounds are required). With the introduction of a cascade of teaching, the convergence speed is rapid and the global search ability is improved. Results from our study demonstrate the excellent performance of the proposed technique in terms of convergence, accuracy, and robustness. Thanks to its good portability, HTLBO can also be used to solve many other complex nonlinear calibration optimization problems.
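
    For orientation, the basic teaching-learning-based optimization loop (teacher phase followed by learner phase) is sketched below for a generic cost function with loose bounds; the hierarchical cascade of teaching used in HTLBO and the camera reprojection cost of the paper are not reproduced.

        import numpy as np

        def tlbo(cost, bounds, pop=30, iters=200, seed=0):
            # bounds: (D,2) array of loose lower/upper parameter bounds
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            X = rng.uniform(lo, hi, size=(pop, len(lo)))
            f = np.array([cost(x) for x in X])
            for _ in range(iters):
                # teacher phase: pull the class towards the current best solution
                teacher, tf = X[np.argmin(f)], rng.integers(1, 3)
                cand = np.clip(X + rng.random(X.shape) * (teacher - tf * X.mean(axis=0)), lo, hi)
                fc = np.array([cost(x) for x in cand])
                better = fc < f
                X[better], f[better] = cand[better], fc[better]
                # learner phase: each learner moves relative to a random peer
                for i in range(pop):
                    j = rng.integers(pop)
                    if j == i:
                        continue
                    step = X[j] - X[i] if f[j] < f[i] else X[i] - X[j]
                    c = np.clip(X[i] + rng.random(len(lo)) * step, lo, hi)
                    v = cost(c)
                    if v < f[i]:
                        X[i], f[i] = c, v
            return X[np.argmin(f)], f.min()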

  15. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in covering large regions with limited rescue resources, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  16. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

    Wilderness search and rescue entails performing a wide range of work in complex environments and over large regions. Given the concerns inherent in covering large regions with limited rescue resources, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location, and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156

  17. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  18. Cross-ratio-based line scan camera calibration using a planar pattern

    NASA Astrophysics Data System (ADS)

    Li, Dongdong; Wen, Gongjian; Qiu, Shaohua

    2016-01-01

    A flexible new technique is proposed to calibrate the geometric model of line scan cameras. In this technique, the line scan camera is rigidly coupled to a calibrated frame camera to establish a pair of stereo cameras. The linear displacements and rotation angles between the two cameras are fixed but unknown. This technique only requires the pair of stereo cameras to observe a specially designed planar pattern shown at a few (at least two) different orientations. At each orientation, a stereo pair is obtained including a linear array image and a frame image. Radial distortion of the line scan camera is modeled. The calibration scheme includes two stages. First, point correspondences are established from the pattern geometry and the projective invariance of cross-ratio. Second, with a two-step calibration procedure, the intrinsic parameters of the line scan camera are recovered from several stereo pairs together with the rigid transform parameters between the pair of stereo cameras. Both computer simulation and real data experiments are conducted to test the precision and robustness of the calibration algorithm, and very good calibration results have been obtained. Compared with classical techniques which use three-dimensional calibration objects or controllable moving platforms, our technique is affordable and flexible in close-range photogrammetric applications.
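
    The projective invariant underlying the correspondence step is the cross-ratio of four collinear points, which keeps its value under any projective transformation and so can be evaluated both on the planar pattern and in the captured images; a small sketch using one common convention of the cross-ratio follows.

        import numpy as np

        def cross_ratio(a, b, c, d):
            # (A,B;C,D) = (AC * BD) / (BC * AD) for four collinear points
            a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
            dist = lambda p, q: np.linalg.norm(p - q)
            return (dist(a, c) * dist(b, d)) / (dist(b, c) * dist(a, d))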

  19. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology

    PubMed Central

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-01-01

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40–50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production. PMID:27571078

  20. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras.

    PubMed

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-27

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, obtained with various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition, based on a comparison of recognition rates with conventional systems.

  1. Noctilucent clouds: modern ground-based photographic observations by a digital camera network.

    PubMed

    Dubietis, Audrius; Dalin, Peter; Balčiūnas, Ričardas; Černis, Kazimieras; Pertsev, Nikolay; Sukhodoev, Vladimir; Perminov, Vladimir; Zalcik, Mark; Zadorozhny, Alexander; Connors, Martin; Schofield, Ian; McEwan, Tom; McEachran, Iain; Frandsen, Soeren; Hansen, Ole; Andersen, Holger; Grønne, Jesper; Melnikov, Dmitry; Manevich, Alexander; Romejko, Vitaly

    2011-10-01

    Noctilucent, or "night-shining," clouds (NLCs) are a spectacular optical nighttime phenomenon that is very often neglected in the context of atmospheric optics. This paper gives a brief overview of current understanding of NLCs by providing a simple physical picture of their formation, relevant observational characteristics, and scientific challenges of NLC research. Modern ground-based photographic NLC observations, carried out in the framework of automated digital camera networks around the globe, are outlined. In particular, the obtained results refer to studies of single quasi-stationary waves in the NLC field. These waves exhibit specific propagation properties--high localization, robustness, and long lifetime--that are the essential requisites of solitary waves.

  2. A Bevel Gear Quality Inspection System Based on Multi-Camera Vision Technology.

    PubMed

    Liu, Ruiling; Zhong, Dexing; Lyu, Hongqiang; Han, Jiuqiang

    2016-08-25

    Surface defect detection and dimension measurement of automotive bevel gears by manual inspection are costly, inefficient, low speed and low accuracy. In order to solve these problems, a synthetic bevel gear quality inspection system based on multi-camera vision technology is developed. The system can detect surface defects and measure gear dimensions simultaneously. Three efficient algorithms named Neighborhood Average Difference (NAD), Circle Approximation Method (CAM) and Fast Rotation-Position (FRP) are proposed. The system can detect knock damage, cracks, scratches, dents, gibbosity or repeated cutting of the spline, etc. The smallest detectable defect is 0.4 mm × 0.4 mm and the precision of dimension measurement is about 40-50 μm. One inspection process takes no more than 1.3 s. Both precision and speed meet the requirements of real-time online inspection in bevel gear production.

  3. Automatic camera-based identification and 3-D reconstruction of electrode positions in electrocardiographic imaging.

    PubMed

    Schulze, Walther H W; Mackens, Patrick; Potyagaylo, Danila; Rhode, Kawal; Tülümen, Erol; Schimpf, Rainer; Papavassiliu, Theano; Borggrefe, Martin; Dössel, Olaf

    2014-12-01

    Electrocardiographic imaging (ECG imaging) is a method to depict electrophysiological processes in the heart. It is an emerging technology with the potential of making the therapy of cardiac arrhythmia less invasive, less expensive, and more precise. A major challenge for integrating the method into clinical workflow is the seamless and correct identification and localization of electrodes on the thorax and their assignment to recorded channels. This work proposes a camera-based system which can localize all electrode positions at once and to an accuracy of approximately 1 ± 1 mm. A system for automatic identification of individual electrodes is implemented that overcomes the need for manual annotation. For this purpose, a system of markers is suggested which facilitates precise localization to subpixel accuracy and robust identification using an error-correcting code. The accuracy of the presented system in identifying and localizing electrodes is validated in a phantom study. Its overall capability is demonstrated in a clinical scenario.

  4. Body-Based Gender Recognition Using Images from Visible and Thermal Cameras

    PubMed Central

    Nguyen, Dat Tien; Park, Kang Ryoung

    2016-01-01

    Gender information has many useful applications in computer vision systems, such as surveillance systems, counting the number of males and females in a shopping mall, access control systems in restricted areas, or any human-computer interaction system. In most previous studies, researchers attempted to recognize gender by using visible light images of the human face or body. However, shadow, illumination, and time of day greatly affect the performance of these methods. To overcome this problem, we propose a new gender recognition method based on the combination of visible light and thermal camera images of the human body. Experimental results, obtained with various kinds of feature extraction and fusion methods, show that our approach is efficient for gender recognition, based on a comparison of recognition rates with conventional systems. PMID:26828487

  5. Estimating the spatial position of marine mammals based on digital camera recordings

    PubMed Central

    Hoekendijk, Jeroen P A; de Vries, Jurre; van der Bolt, Krissy; Greinert, Jens; Brasseur, Sophie; Camphuysen, Kees C J; Aarts, Geert

    2015-01-01

    Estimating the spatial position of organisms is essential to quantify interactions between the organism and the characteristics of its surroundings, for example, predator–prey interactions, habitat selection, and social associations. Because marine mammals spend most of their time under water and may appear at the surface only briefly, determining their exact geographic location can be challenging. Here, we developed a photogrammetric method to accurately estimate the spatial position of marine mammals or birds at the sea surface. Digital recordings containing landscape features with known geographic coordinates can be used to estimate the distance and bearing of each sighting relative to the observation point. The method can correct for frame rotation, estimates pixel size based on the reference points, and can be applied to scenarios with and without a visible horizon. A set of R functions was written to process the images and obtain accurate geographic coordinates for each sighting. The method is applied to estimate the spatiotemporal fine-scale distribution of harbour porpoises in a tidal inlet. Video recordings of harbour porpoises were made from land, using a standard digital single-lens reflex (DSLR) camera, positioned at a height of 9.59 m above mean sea level. Porpoises were detected up to a distance of ∼3136 m (mean 596 m), with a mean location error of 12 m. The method presented here allows for multiple detections of different individuals within a single video frame and for tracking movements of individuals based on repeated sightings. In comparison with traditional methods, this method only requires a digital camera to provide accurate location estimates. It especially has great potential in regions with ample data on local (a)biotic conditions, to help resolve functional mechanisms underlying habitat selection and other behaviors in marine mammals in coastal areas. PMID:25691982
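
    When the horizon is visible, the distance to an animal at the sea surface follows from the camera height and the depression angle below the horizon measured in pixels; the flat-sea sketch below ignores Earth curvature and refraction (which matter at kilometre ranges and are handled by the paper's R functions), and the numeric values are illustrative only.

        import math

        def distance_to_surface_object(cam_height_m, pixels_below_horizon, focal_px):
            # depression angle below the horizon from the pixel offset, then
            # distance = height / tan(angle) on a flat sea
            depression = math.atan2(pixels_below_horizon, focal_px)
            return cam_height_m / math.tan(depression)

        # e.g. camera 9.59 m above sea level, sighting 30 px below the horizon,
        # focal length 8000 px (made-up values)
        print(distance_to_surface_object(9.59, 30, 8000))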

  6. Positron emission tomography: physics, instrumentation, and image analysis.

    PubMed

    Porenta, G

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center in the past was a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center.

  7. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media.

    PubMed

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V

    2016-06-06

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.

  8. Lock-in camera based heterodyne holography for ultrasound-modulated optical tomography inside dynamic scattering media

    NASA Astrophysics Data System (ADS)

    Liu, Yan; Shen, Yuecheng; Ma, Cheng; Shi, Junhui; Wang, Lihong V.

    2016-06-01

    Ultrasound-modulated optical tomography (UOT) images optical contrast deep inside scattering media. Heterodyne holography based UOT is a promising technique that uses a camera for parallel speckle detection. In previous works, the speed of data acquisition was limited by the low frame rates of conventional cameras. In addition, when the signal-to-background ratio was low, these cameras wasted most of their bits representing an informationless background, resulting in extremely low efficiencies in the use of bits. Here, using a lock-in camera, we increase the bit efficiency and reduce the data transfer load by digitizing only the signal after rejecting the background. Moreover, compared with the conventional four-frame based amplitude measurement method, our single-frame method is more immune to speckle decorrelation. Using lock-in camera based UOT with an integration time of 286 μs, we imaged an absorptive object buried inside a dynamic scattering medium exhibiting a speckle correlation time (τc) as short as 26 μs. Since our method can tolerate speckle decorrelation faster than that found in living biological tissue (τc ∼ 100-1000 μs), it is promising for in vivo deep tissue non-invasive imaging.

  9. Ventilation/Perfusion Positron Emission Tomography—Based Assessment of Radiation Injury to Lung

    SciTech Connect

    Siva, Shankar; Hardcastle, Nicholas; Kron, Tomas; Bressel, Mathias; Callahan, Jason; MacManus, Michael P.; Shaw, Mark; Plumridge, Nikki; Hicks, Rodney J.; Steinfort, Daniel; Ball, David L.; Hofman, Michael S.

    2015-10-01

    Purpose: To investigate ⁶⁸Ga-ventilation/perfusion (V/Q) positron emission tomography (PET)/computed tomography (CT) as a novel imaging modality for assessment of perfusion, ventilation, and lung density changes in the context of radiation therapy (RT). Methods and Materials: In a prospective clinical trial, 20 patients underwent 4-dimensional (4D)-V/Q PET/CT before, midway through, and 3 months after definitive lung RT. Eligible patients were prescribed 60 Gy in 30 fractions with or without concurrent chemotherapy. Functional images were registered to the RT planning 4D-CT, and isodose volumes were averaged into 10-Gy bins. Within each dose bin, relative loss in standardized uptake value (SUV) was recorded for ventilation and perfusion, and loss in air-filled fraction was recorded to assess RT-induced lung fibrosis. A dose-effect relationship was described using both linear and 2-parameter logistic fit models, and goodness of fit was assessed with the Akaike Information Criterion (AIC). Results: A total of 179 imaging datasets were available for analysis (1 scan was unrecoverable). An almost perfectly linear negative dose-response relationship was observed for perfusion and air-filled fraction (r²=0.99, P<.01), with ventilation strongly negatively linear (r²=0.95, P<.01). Logistic models did not provide a better fit as evaluated by AIC. Perfusion, ventilation, and the air-filled fraction decreased 0.75 ± 0.03%, 0.71 ± 0.06%, and 0.49 ± 0.02%/Gy, respectively. Within high-dose regions, higher baseline perfusion SUV was associated with a greater rate of loss. At 50 Gy and 60 Gy, the rate of loss was 1.35% (P=.07) and 1.73% (P=.05) per SUV, respectively. Of 8/20 patients with peritumoral reperfusion/reventilation during treatment, 7/8 did not sustain this effect after treatment. Conclusions: Radiation-induced regional lung functional deficits occur in a dose-dependent manner and can be estimated by simple linear models with 4D-V/Q PET

  10. Fluorodeoxyglucose-positron emission tomography in carcinoma nasopharynx: Can we predict outcomes and tailor therapy based on postradiotherapy fluorodeoxyglucose-positron emission tomography?

    PubMed Central

    Laskar, Sarbani Ghosh; Baijal, Gunjan; Rangarajan, Venkatesh; Purandare, Nilendu; Sengar, Manju; Shah, Sneha; Gupta, Tejpal; Budrukkar, Ashwini; Murthy, Vedang; Pai, Prathamesh S.; D’Cruz, A. K.; Agarwal, J. P.

    2016-01-01

    Background: Positron emission tomography-computed tomography (PET-CT) is an emerging modality for staging and response evaluation in carcinoma nasopharynx. This study was conducted to evaluate the impact of PET-CT in assessing response and outcomes in carcinoma nasopharynx. Materials and Methods: Forty-five patients with nonmetastatic carcinoma nasopharynx who underwent PET-CT for response evaluation at 10-12 weeks posttherapy between 2004 and 2009 were evaluated. Patients were classified as responders (Group A) if there was a complete response on PET-CT or as nonresponders (Group B) if there was any uptake above the background activity. Data regarding demographics, treatment, and outcomes were collected from their records and compared across Groups A and B. Results: The median age was 41 years. Forty-two of 45 (93.3%) patients had WHO Grade 2B disease (undifferentiated squamous carcinoma). 24.4%, 31.1%, 15.6%, and 28.8% of patients were in American Joint Committee on Cancer Stage IIb, III, IVa, and IVb, respectively. All patients were treated with neoadjuvant chemotherapy followed by concomitant chemoradiotherapy. Of the 45 patients, 28 (62.2%) were classified as responders, whereas 17 (37.8%) were classified as nonresponders. There was no significant difference in the age, sex, WHO grade, and stage distribution between the groups. Compliance to treatment was comparable across both groups. The median follow-up was 25.3 months (759 days). The disease-free survival (DFS) of the group was 57.3% at 3 years. The DFS at 3 years was 87.3% and 19.7% for Groups A and B, respectively (log-rank test, P < 0.001). Univariate and multivariate analyses revealed group to be the only significant factor predicting DFS (P = 0.002 and P < 0.001, respectively). In Group B, the most common site of disease failure was distant (9, 53%). Conclusion: PET-CT can be used to evaluate response and as a tool to identify patients at higher risk of distant failure. Further, this could be exploited to identify

  11. Improvement of the GRACE star camera data based on the revision of the combination method

    NASA Astrophysics Data System (ADS)

    Bandikova, Tamara; Flury, Jakob

    2014-11-01

    The new release of the sensor and instrument data (Level-1B release 02) of the Gravity Recovery and Climate Experiment (GRACE) had a substantial impact on the improvement of the overall accuracy of the gravity field models. This has implied that improvements on the sensor data level can still significantly contribute to arriving closer to the GRACE baseline accuracy. The recent analysis of the GRACE star camera data (SCA1B RL02) revealed their unexpectedly high noise. As the star camera (SCA) data are essential for the processing of the K-band ranging data and the accelerometer data, thorough investigation of the data set was needed. We fully reexamined the SCA data processing from Level-1A to Level-1B, with focus on the method used to combine the data delivered by the two SCA heads. In the first step, we produced and compared our own combined attitude solution by applying two different combination methods to the SCA Level-1A data. The first method introduces the information about the anisotropic accuracy of the star camera measurement in terms of a weighting matrix. This method was applied in the official processing as well. The alternative method merges only the well-determined SCA boresight directions and was implemented on the GRACE SCA data for the first time. Both methods were expected to provide an optimal solution characterized by full accuracy about all three axes, which was confirmed. In the second step, we analyzed the differences between the official SCA1B RL02 data generated by the Jet Propulsion Laboratory (JPL) and our solution. SCA1B RL02 contains noise that is systematically higher by a factor of about 3-4. The data analysis revealed that the reason is an incorrect implementation of algorithms in the JPL processing routines. After correct implementation of the combination method, significant improvement within the whole spectrum was achieved. Based on these results, official reprocessing of the SCA data is suggested, as the SCA attitude data

  12. Human Detection Based on the Generation of a Background Image and Fuzzy System by Using a Thermal Camera

    PubMed Central

    Jeon, Eun Som; Kim, Jong Hyun; Hong, Hyung Gil; Batchuluun, Ganbayar; Park, Kang Ryoung

    2016-01-01

    Recently, human detection has been used in various applications. Although visible light cameras are usually employed for this purpose, human detection based on visible light cameras has limitations due to darkness, shadows, sunlight, etc. An approach using a thermal (far infrared light) camera has been studied as an alternative for human detection; however, the performance of human detection by thermal cameras is degraded in cases of low temperature differences between humans and background. To overcome these drawbacks, we propose a new method for human detection using thermal camera images. The main contribution of our research is that the thresholds for creating the binarized difference image between the input and background (reference) images can be adaptively determined based on fuzzy systems, using information derived from the background image and the difference values between the background and input images. By using our method, the human area can be correctly detected irrespective of the various conditions of the input and background (reference) images. For the performance evaluation of the proposed method, experiments were performed with 15 datasets captured under different weather and light conditions. In addition, experiments with an open database were also performed. The experimental results confirm that the proposed method can robustly detect human shapes in various environments. PMID:27043564
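
    A minimal sketch of the adaptive background-difference idea described above, assuming a simple sliding threshold in place of the authors' fuzzy system (the function name, constants, and the rule itself are illustrative, not taken from the paper):

        import numpy as np

        def binarize_difference(frame, background, k_lo=1.5, k_hi=3.0):
            # Absolute difference between the thermal input frame and the background image.
            diff = np.abs(frame.astype(np.float32) - background.astype(np.float32))
            sigma = background.std()
            # Crude stand-in for the fuzzy rule: when the contrast between input and
            # background is low, the threshold is relaxed so faint human regions are kept;
            # when contrast is high, a stricter threshold suppresses background noise.
            contrast = np.clip(diff.mean() / (sigma + 1e-6), 0.0, 1.0)
            threshold = (k_lo + (k_hi - k_lo) * contrast) * sigma
            return (diff > threshold).astype(np.uint8)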

  13. Temperature dependent operation of PSAPD-based compact gamma camera for SPECT imaging

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2011-01-01

    We investigated the dependence of image quality on the temperature of a position sensitive avalanche photodiode (PSAPD)-based small animal single photon emission computed tomography (SPECT) gamma camera with a CsI:Tl scintillator. Currently, nitrogen gas cooling is preferred to operate PSAPDs in order to minimize the dark current shot noise. Being able to operate a PSAPD at a relatively high temperature (e.g., 5 °C) would allow a more compact and simple cooling system for the PSAPD. In our investigation, the temperature of the PSAPD was controlled by varying the flow of cold nitrogen gas through the PSAPD module and varied from −40 °C to 20 °C. Three experiments were performed to demonstrate the performance variation over this temperature range. The point spread function (PSF) of the gamma camera was measured at various temperatures, showing the variation of the full-width-at-half-maximum (FWHM) of the PSF. In addition, a 99mTc-pertechnetate (140 keV) flood source was imaged and the visibility of the scintillator segmentation (16×16 array, 8 mm × 8 mm area, 400 μm pixel size) at different temperatures was evaluated. Comparison of image quality was made at −25 °C and 5 °C using a mouse heart phantom filled with an aqueous solution of 99mTc-pertechnetate and imaged using a 0.5 mm pinhole collimator made of tungsten. The reconstructed image quality of the mouse heart phantom at 5 °C degraded in comparison with the reconstructed image quality at −25 °C. However, the defect and structure of the mouse heart phantom were clearly observed, showing the feasibility of operating PSAPDs for SPECT imaging at 5 °C, a temperature that would not require nitrogen cooling. All PSAPD evaluations were conducted with an applied bias voltage that allowed the highest gain at a given temperature. PMID:24465051

  14. Observation of Passive and Explosive Emissions at Stromboli with a Ground-based Hyperspectral TIR Camera

    NASA Astrophysics Data System (ADS)

    Smekens, J. F.; Mathieu, G.

    2015-12-01

    Scientific imaging techniques have progressed at a fast pace in recent years, thanks in part to great improvements in detector technology, and through our ability to process large amounts of complex data using sophisticated software. Broadband thermal cameras are ubiquitously used for permanent monitoring of volcanic activity, and have been used in a multitude of scientific applications, from tracking ballistics to studying the thermal evolution of lava flow fields and volcanic plumes. In parallel, UV cameras are now used at several volcano observatories to quantify daytime sulfur dioxide (SO2) emissions at very high frequency. In this work we present the results of the first deployment of a ground-based Thermal Infrared (TIR) Hyperspectral Imaging System (Telops Hyper-Cam LW) for the study of passive and explosive volcanic activity at Stromboli volcano, Italy. The instrument uses a Michelson spectrometer and Fourier Transform Infrared Spectrometry to produce hyperspectral datacubes of a scene (320x256 pixels) in the range 7.7-11.8 μm, with a spectral resolution of up to 0.25 cm^-1 and at frequencies of ~10 Hz. The activity at Stromboli is characterized by explosions of small magnitude, often containing significant amounts of gas and ash, separated by periods of quiescent degassing of 10-60 minutes. With our dataset, spanning about 5 days of monitoring, we are able to detect and track temporal variations of SO2 and ash emissions during both daytime and nighttime. It ultimately allows for the quantification of the mass of gas and ash ejected during and between explosive events. Although the high price and power consumption of the instrument are obstacles to its deployment as a monitoring tool, this type of data set offers unprecedented insight into the dynamic processes taking place at Stromboli, and could lead to a better understanding of the eruptive mechanisms at persistently active systems in general.

  15. Method for validating cloud mask obtained from satellite measurements using ground-based sky camera.

    PubMed

    Letu, Husi; Nagao, Takashi M; Nakajima, Takashi Y; Matsumae, Yoshiaki

    2014-11-01

    Error propagation in Earth's atmospheric, oceanic, and land surface parameters of the satellite products caused by misclassification of the cloud mask is a critical issue for improving the accuracy of satellite products. Thus, characterizing the accuracy of the cloud mask is important for investigating the influence of the cloud mask on satellite products. In this study, we proposed a method for validating multiwavelength satellite data derived cloud masks using ground-based sky camera (GSC) data. First, a cloud cover algorithm for GSC data has been developed using sky index and bright index. Then, Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data derived cloud masks by two cloud-screening algorithms (i.e., MOD35 and CLAUDIA) were validated using the GSC cloud mask. The results indicate that MOD35 is likely to classify ambiguous pixels as "cloudy," whereas CLAUDIA is likely to classify them as "clear." Furthermore, the influence of error propagations caused by misclassification of the MOD35 and CLAUDIA cloud masks on MODIS derived reflectance, brightness temperature, and normalized difference vegetation index (NDVI) in clear and cloudy pixels was investigated using sky camera data. It shows that the influence of the error propagation by the MOD35 cloud mask on the MODIS derived monthly mean reflectance, brightness temperature, and NDVI for clear pixels is significantly smaller than for the CLAUDIA cloud mask; the influence of the error propagation by the CLAUDIA cloud mask on MODIS derived monthly mean cloud products for cloudy pixels is significantly smaller than that by the MOD35 cloud mask.
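
    A toy version of the ground-based sky-camera cloud mask: the sky index below is the commonly used normalized blue-red ratio, and the threshold is a placeholder; the exact indices and cut-off used in the study may differ.

        import numpy as np

        def cloud_mask(rgb, sky_threshold=0.12):
            # Clear sky scatters blue strongly, so the normalized (B - R)/(B + R)
            # "sky index" is high for clear pixels and low for grey/white cloud.
            r = rgb[..., 0].astype(np.float32)
            b = rgb[..., 2].astype(np.float32)
            sky_index = (b - r) / (b + r + 1e-6)
            cloudy = sky_index < sky_threshold
            return cloudy, float(cloudy.mean())   # per-pixel mask and total cloud fraction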

  16. An energy-optimized collimator design for a CZT-based SPECT camera

    PubMed Central

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2015-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radio-tracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy levels of gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With the pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance were carefully studied. Scatter and penetration of the collimator, sensitivity and spatial resolution of the system were assessed for four radionuclides including 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then decided upon such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked to prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates promise of our proposed energy-optimized collimator to be used in a CZT-based gamma camera, with comparable or even better imaging performance versus commercial collimators

  17. An energy-optimized collimator design for a CZT-based SPECT camera

    NASA Astrophysics Data System (ADS)

    Weng, Fenghua; Bagchi, Srijeeta; Zan, Yunlong; Huang, Qiu; Seo, Youngho

    2016-01-01

    In single photon emission computed tomography, it is a challenging task to maintain reasonable performance using only one specific collimator for radiotracers over a broad spectrum of diagnostic photon energies, since photon scatter and penetration in a collimator differ with the photon energy. Frequent collimator exchanges are inevitable in daily clinical SPECT imaging, which hinders throughput while subjecting the camera to operational errors and damage. Our objective is to design a collimator which, independent of the photon energy, performs reasonably well for commonly used radiotracers with low- to medium-energy levels of gamma emissions. Using the Geant4 simulation toolkit, we simulated and evaluated a parallel-hole collimator mounted to a CZT detector. With the pixel-geometry-matching collimation, the pitch of the collimator hole was fixed to match the pixel size of the CZT detector throughout this work. Four variables, hole shape, hole length, hole radius/width and the source-to-collimator distance were carefully studied. Scatter and penetration of the collimator, sensitivity and spatial resolution of the system were assessed for four radionuclides including 57Co, 99mTc, 123I and 111In, with respect to the aforementioned four variables. An optimal collimator was then decided upon such that it maximized the total relative sensitivity (TRS) for the four considered radionuclides while other performance parameters, such as scatter, penetration and spatial resolution, were benchmarked to prevalent commercial scanners and collimators. Digital phantom studies were also performed to validate the system with the optimal square-hole collimator (23 mm hole length, 1.28 mm hole width, and 0.32 mm septal thickness) in terms of contrast, contrast-to-noise ratio and recovery ratio. This study demonstrates promise of our proposed energy-optimized collimator to be used in a CZT-based gamma camera, with comparable or even better imaging performance versus commercial

  18. BroCam: a versatile PC-based CCD camera system

    NASA Astrophysics Data System (ADS)

    Klougart, Jens

    1995-03-01

    At Copenhagen University, we have developed a compact CCD camera system for single and mosaic CCDs. Camera control and data acquisition are performed by a 486-type PC via a frame buffer located in one ISA-bus slot, communicating with the camera electronics over two optical fibers. The PC can run either special-purpose DOS programs or, in a more general mode, LINUX, a UNIX-like operating system. In the latter mode, standard software packages such as SAOimage and Gnuplot are utilized extensively, thereby reducing the amount of camera-specific software. At the same time, the observer feels at ease with the system in an IRAF-like environment. Finally, the LINUX version enables the camera to be remotely controlled.

  19. Multi-Kinect v2 Camera Based Monitoring System for Radiotherapy Patient Safety.

    PubMed

    Santhanam, Anand P; Min, Yugang; Kupelian, Patrick; Low, Daniel

    2016-01-01

    3D Kinect camera systems are essential for real-time imaging of the 3D treatment space, which consists of both the patient anatomy and the treatment equipment setup. In this paper, we present the technical details of a 3D treatment room monitoring system that employs a scalable number of calibrated and coregistered Kinect v2 cameras. The monitoring system tracks radiation gantry and treatment couch positions, and tracks the patient and immobilization accessories. The number and positions of the cameras were selected to avoid line-of-sight issues and to adequately cover the treatment setup. The cameras were calibrated with a calibration error of 0.1 mm. Our tracking system evaluation shows that both gantry and patient motion could be acquired at a rate of 30 frames per second. The transformations between the cameras yielded a 3D treatment space accuracy of < 2 mm error in a radiotherapy setup within 500 mm around the isocenter.
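
    The fusion step this relies on can be sketched as follows, assuming each Kinect has a calibrated 4x4 homogeneous transform into the shared treatment-room frame (names and shapes are illustrative, not taken from the paper):

        import numpy as np

        def to_room_frame(points_cam, T_cam_to_room):
            # points_cam: (N, 3) point cloud from one Kinect; T_cam_to_room: 4x4 transform.
            homog = np.hstack([points_cam, np.ones((points_cam.shape[0], 1))])
            return (T_cam_to_room @ homog.T).T[:, :3]

        # Fusing the calibrated, co-registered cameras is then a concatenation:
        # room_cloud = np.vstack([to_room_frame(p, T) for p, T in zip(clouds, transforms)])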

  20. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    SciTech Connect

    Ingram, S; Rao, A; Wendt, R; Castillo, R; Court, L; Yang, J; Beadle, B

    2014-06-01

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
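
    A generic two-view sketch of the frame-to-frame tracking described above, using OpenCV feature tracking, an essential-matrix estimate, and pose recovery; this is a standard pipeline under the abstract's assumptions, not the authors' exact implementation, and the translation is recovered only up to scale:

        import cv2

        def frame_to_frame_pose(img1, img2, K):
            # Track corners from one endoscopic frame into the next.
            g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
            g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
            pts1 = cv2.goodFeaturesToTrack(g1, maxCorners=500, qualityLevel=0.01, minDistance=7)
            pts2, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts1, None)
            good1 = pts1[status.ravel() == 1].reshape(-1, 2)
            good2 = pts2[status.ravel() == 1].reshape(-1, 2)
            # Epipolar geometry: essential matrix, then relative rotation/translation.
            E, inliers = cv2.findEssentialMat(good1, good2, K, method=cv2.RANSAC, threshold=1.0)
            _, R, t, _ = cv2.recoverPose(E, good1, good2, K, mask=inliers)
            # Surrounding structure could then be recovered with cv2.triangulatePoints.
            return R, t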

  1. A study of defects in iron-based binary alloys by the Mössbauer and positron annihilation spectroscopies

    SciTech Connect

    Idczak, R. Konieczny, R.; Chojcan, J.

    2014-03-14

    The room-temperature positron annihilation lifetime spectra and 57Fe Mössbauer spectra were measured for pure Fe as well as for iron-based Fe(1−x)Re(x), Fe(1−x)Os(x), Fe(1−x)Mo(x), and Fe(1−x)Cr(x) solid solutions, where x is in the range between 0.01 and 0.05. The measurements were performed in order to check whether the theoretical calculations on the interactions between vacancies and solute atoms in iron, known from the literature, can be supported by experimental data. The vacancies were created during formation and further mechanical processing of the iron systems under consideration, so the spectra mentioned above were collected at least twice for each studied sample synthesized in an arc furnace: after cold rolling to a thickness of about 40 μm as well as after subsequent annealing at 1270 K for 2 h. It was found that only in Fe and the Fe-Cr system are the isolated vacancies thermally generated at high temperatures not observed at room temperature, and cold rolling of the materials leads to the creation of another type of vacancy, associated with edge dislocations. In the case of the other cold-rolled systems, positrons detect vacancies of the two types mentioned above and Mössbauer nuclei "see" the vacancies mainly in the vicinity of non-iron atoms. This speaks in favour of the suggestion that in the iron matrix the solute atoms of Os, Re, and Mo interact attractively with vacancies, as predicted by theoretical computations, and the energy of the interaction is large enough for vacancy-solute atom pairs to exist at room temperature. On the other hand, the corresponding interaction for Cr atoms is either repulsive or attractive but smaller than that for Os, Re, and Mo atoms. The latter is in agreement with the theoretical calculations.

  2. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  3. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  4. Towards the development of a SiPM-based camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Ambrosi, G.; Bissaldi, E.; Di Venere, L.; Fiandrini, E.; Giglietto, N.; Giordano, F.; Ionica, M.; Paoletti, R.; Simone, D.; Vagelli, V.

    2017-03-01

    The Italian National Institute for Nuclear Physics (INFN) is involved in the development of a prototype for a camera based on Silicon Photomultipliers (SiPMs) for the Cherenkov Telescope Array (CTA), a new generation of telescopes for ground-based gamma-ray astronomy. In this framework, an R&D program within the `Progetto Premiale TElescopi CHErenkov made in Italy (TECHE.it)' for the development of SiPMs suitable for Cherenkov light detection in the Near-Ultraviolet (NUV) has been carried out. The developed device is a NUV High-Density (NUV-HD) SiPM based on a micro cell of 30 μm × 30 μm and an area of 6 mm × 6 mm, produced by Fondazione Bruno Kessler (FBK). A full characterization of the single NUV-HD SiPM will be presented. A matrix of 8 × 8 single NUV-HD SiPMs will be part of the focal plane of the Schwarzschild- Couder Telescope prototype (pSCT) for CTA. An update on recent tests on the detectors arranged in this matrix configuration and on the front-end electronics will be given.

  5. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - under the condition that an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - at the same time imply the problem that two single aerial images do not always meet the required overlap to use them for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-to-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
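
    The dependence of DTM accuracy on flight altitude mentioned at the end follows from normal-case stereo error propagation, sigma_Z ≈ Z^2 / (B · f) · sigma_px. A small numeric sketch, where the base, focal length and matching error are assumed placeholder values, not the rig used in Doliche:

        def expected_height_error(altitude_m, base_m, focal_px, match_err_px=0.5):
            # Normal-case stereo depth error: sigma_Z = Z^2 / (B * f) * sigma_px.
            return altitude_m ** 2 / (base_m * focal_px) * match_err_px

        # e.g. a 0.2 m base, 2800 px focal length, 30 m flight altitude:
        # expected_height_error(30.0, 0.2, 2800)  ->  about 0.8 m per 0.5 px matching error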

  6. Car speed estimation based on cross-ratio using video data of car-mounted camera (black box).

    PubMed

    Han, Inhwan

    2016-12-01

    This paper proposes several methods for using footage from a car-mounted camera (car black box) to estimate the speed of the car carrying the camera, or the speed of other cars. This enables estimating car velocities directly from recorded footage without needing the specific physical locations of the cars shown in the recorded material. To achieve this, this study collected 96 cases of black box footage and classified them for analysis based on various factors such as travel circumstances and directions. With these data, several case studies relating to speed estimation of the camera-mounted car and of other cars in recorded footage, while the camera-mounted car is stationary or moving, have been conducted. Additionally, a rough method for estimating the speed of other cars moving through a curvilinear path and its analysis results are described, for practical uses. Speed estimations made using the cross-ratio were compared with the results of the traditional footage-analysis method and with GPS calculation results for camera-mounted cars, proving its applicability.
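
    The projective invariance the speed estimates rest on can be illustrated as follows: the cross-ratio of four collinear points is preserved by the camera projection, so a car's coordinate along a road line can be recovered from its image coordinate given three references with known spacing (e.g., lane markings). This is a hedged sketch of the principle, not the paper's full procedure:

        def cross_ratio(a, b, c, d):
            # (AC * BD) / (BC * AD) for four collinear point coordinates.
            return ((c - a) * (d - b)) / ((c - b) * (d - a))

        def road_coordinate(uA, uB, uC, uD, xA, xB, xC):
            # u*: image coordinates along the projected line; x*: known road coordinates
            # of the three references. Solve CR(world) = CR(image) for the unknown xD.
            k = cross_ratio(uA, uB, uC, uD)
            p, q = (xC - xA), (xC - xB)
            return (p * xB - k * q * xA) / (p - k * q)

        # Speed then follows from successive positions: (xD_t2 - xD_t1) / (t2 - t1).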

  7. Uas Based Tree Species Identification Using the Novel FPI Based Hyperspectral Cameras in Visible, NIR and SWIR Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Näsi, R.; Honkavaara, E.; Tuominen, S.; Saari, H.; Pölönen, I.; Hakala, T.; Viljanen, N.; Soukkamäki, J.; Näkki, I.; Ojanen, H.; Reinikainen, J.

    2016-06-01

    Unmanned airborne systems (UAS) based remote sensing offers a flexible tool for environmental monitoring. Novel lightweight Fabry-Perot interferometer (FPI) based, frame-format, hyperspectral imaging in the spectral range from 400 to 1600 nm was used for identifying different species of trees in a forest area. To the best of the authors' knowledge, this was the first study in which stereoscopic, hyperspectral VIS, NIR and SWIR data were collected for tree species identification using UAS. The first results of the analysis, based on the fusion of two FPI-based hyperspectral imagers and an RGB camera, showed that the novel FPI hyperspectral technology provided accurate geometric, radiometric and spectral information in a forested scene and is operational for environmental remote sensing applications.

  8. Instrumentation optimization for positron emission mammography

    SciTech Connect

    Moses, William W.; Qi, Jinyi

    2003-06-05

    The past several years have seen designs for PET cameras optimized to image the breast, commonly known as Positron Emission Mammography or PEM cameras. The guiding principle behind PEM instrumentation is that a camera whose field of view is restricted to a single breast has higher performance and lower cost than a conventional PET camera. The most common geometry is a pair of parallel planes of detector modules, although geometries that encircle the breast have also been proposed. The ability of the detector modules to measure the depth of interaction (DOI) is also a relevant feature. This paper finds that while both the additional solid angle coverage afforded by encircling the breast and the decreased blurring afforded by the DOI measurement improve performance, the ability to measure DOI is more important than the ability to encircle the breast.

  9. Positron spectroscopy for materials characterization

    SciTech Connect

    Schultz, P.J.; Snead, C.L. Jr.

    1988-01-01

    One of the more active areas of research on materials involves the observation and characterization of defects. The discovery of positron localization in vacancy-type defects in solids in the 1960s initiated a vast number of experimental and theoretical investigations which continue to this day. Traditional positron annihilation spectroscopic techniques, including lifetime studies, angular correlation, and Doppler broadening of annihilation radiation, are still being applied to new problems in the bulk properties of simple metals and their alloys. In addition, new techniques based on tunable sources of monoenergetic positron beams have, in the last 5 years, expanded the horizons to studies of surfaces, thin films, and interfaces. In the present paper we briefly review these experimental techniques, illustrating with some of the important accomplishments of the field. 40 refs., 19 figs.

  10. Positron-alkali atom scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.; Ward, S. J.

    1990-01-01

    Positron-alkali atom scattering was recently investigated both theoretically and experimentally in the energy range from a few eV up to 100 eV. On the theoretical side calculations of the integrated elastic and excitation cross sections as well as total cross sections for Li, Na and K were based upon either the close-coupling method or the modified Glauber approximation. These theoretical results are in good agreement with experimental measurements of the total cross section for both Na and K. Resonance structures were also found in the L = 0, 1 and 2 partial waves for positron scattering from the alkalis. The structure of these resonances appears to be quite complex and, as expected, they occur in conjunction with the atomic excitation thresholds. Currently both theoretical and experimental work is in progress on positron-Rb scattering in the same energy range.

  11. Vacancy trapping by solute atoms during quenching in Cu-based dilute alloys studied by positron annihilation spectroscopy

    NASA Astrophysics Data System (ADS)

    Yabuuchi, A.; Yamamoto, Y.; Ohira, J.; Sugita, K.; Mizuno, M.; Araki, H.; Shirai, Y.

    2009-11-01

    Frozen-in vacancies and their recovery have been investigated in some Cu-based dilute alloys by using positron annihilation lifetime spectroscopy. Cu-0.5at%Sb, Cu-0.5at%Sn and Cu-0.5at%In dilute bulk alloys were quenched into ice water from 1223 K. A pure-Cu specimen was also quenched from the same temperature. As a result, no frozen-in vacancies were detected in the as-quenched pure-Cu specimen. On the other hand, the as-quenched Cu-0.5at%Sb alloy contained frozen-in thermal equilibrium vacancies with a concentration of 3 × 10^-5. Furthermore, these frozen-in vacancies in the Cu-0.5at%Sb alloy were stable up to 473 K and began to migrate at 523 K. Finally, the Cu-Sb alloy recovered to the fully annealed state at 823 K. This thermal stability clearly implies that some interaction exists between a vacancy and an Sb atom and that, due to this interaction, thermal equilibrium vacancies are trapped by Sb atoms during quenching.

  12. Cluster analysis for identifying sub-types of tinnitus: a positron emission tomography and voxel-based morphometry study.

    PubMed

    Schecklmann, Martin; Lehner, Astrid; Poeppl, Timm B; Kreuzer, Peter M; Hajak, Göran; Landgrebe, Michael; Langguth, Berthold

    2012-11-16

    Tinnitus is a heterogeneous disorder with respect to its etiology and phenotype. Thus, the identification of sub-types implicates high relevance for treatment recommendations. For this aim, we used cluster analysis of patients for which clinical data, positron-emission tomography (PET) data and voxel-based morphometry (VBM) data were available. 44 patients with chronic tinnitus were included in this analysis. On a phenotypical level, we used tinnitus distress, duration, and laterality for clustering. To correct PET and VBM data for age, gender, and hearing, we built up a design matrix including these variables as regressors and extracted the residuals. We applied Ward's clustering method and forced cluster analysis to divide the data into two groups for both imaging and phenotypical data. On a phenotypical level the clustered groups differed only in tinnitus laterality (uni- vs. bilateral tinnitus), but not in tinnitus duration, distress, age, gender, and hearing. For grey matter volume, groups differed mainly in frontal, cingulate, temporal, and thalamic areas. For glucose metabolism, groups differed in temporal and parietal areas. The correspondence of classification was near chance level for the interrelationship of all three data set clusters. Thus, we showed that clustering according to imaging data is feasible and might depict a new approach for identifying tinnitus sub-types. However, it remains an open question to what extent the phenotypical and imaging levels may be interrelated. This article is part of a Special Issue entitled: Tinnitus Neuroscience.
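
    The two-step analysis described above (regress out age, gender, and hearing, then apply Ward's clustering to the residuals) can be sketched with scikit-learn; the array names and feature layout are assumptions for illustration, not the authors' code:

        from sklearn.linear_model import LinearRegression
        from sklearn.cluster import AgglomerativeClustering

        def cluster_imaging(features, covariates, n_clusters=2):
            # features: (n_patients, n_voxels_or_metrics); covariates: (n_patients, 3)
            # holding age, gender, and hearing level. Remove their linear effect first.
            resid = features - LinearRegression().fit(covariates, features).predict(covariates)
            # Ward's hierarchical clustering on the residuals, forced into two groups.
            return AgglomerativeClustering(n_clusters=n_clusters, linkage="ward").fit_predict(resid)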

  13. Research on simulation and verification system of satellite remote sensing camera video processor based on dual-FPGA

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Liu, Qi; Cui, Xuenan

    2014-09-01

    To satisfy the need for testing the video processors of satellite remote sensing cameras, a design is provided for a simulation and verification system for satellite remote sensing camera video processors based on dual FPGAs. The correctness of the video processor FPGA logic can be verified even without CCD signals or an analog-to-digital converter. Two Xilinx Virtex FPGAs are adopted to form the central unit; the logic for A/D digital data generation and data processing is developed in VHDL. An RS-232 interface is used to receive commands from the host computer, and different types of data are generated and output depending on the commands. Experimental results show that the simulation and verification system is flexible and works well. The system meets the requirements of testing video processors for several different types of satellite remote sensing cameras.

  14. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera

    PubMed Central

    Wilkes, Thomas C.; McGonigle, Andrew J. S.; Pering, Tom D.; Taggart, Angus J.; White, Benjamin S.; Bryant, Robert G.; Willmott, Jon R.

    2016-01-01

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements. PMID:27782054

  15. Ultraviolet Imaging with Low Cost Smartphone Sensors: Development and Application of a Raspberry Pi-Based UV Camera.

    PubMed

    Wilkes, Thomas C; McGonigle, Andrew J S; Pering, Tom D; Taggart, Angus J; White, Benjamin S; Bryant, Robert G; Willmott, Jon R

    2016-10-06

    Here, we report, for what we believe to be the first time, on the modification of a low cost sensor, designed for the smartphone camera market, to develop an ultraviolet (UV) camera system. This was achieved via adaptation of Raspberry Pi cameras, which are based on back-illuminated complementary metal-oxide semiconductor (CMOS) sensors, and we demonstrated the utility of these devices for applications at wavelengths as low as 310 nm, by remotely sensing power station smokestack emissions in this spectral region. Given the very low cost of these units, ≈ USD 25, they are suitable for widespread proliferation in a variety of UV imaging applications, e.g., in atmospheric science, volcanology, forensics and surface smoothness measurements.

  16. Two Persons with Multiple Disabilities Use Camera-Based Microswitch Technology to Control Stimulation with Small Mouth and Eyelid Responses

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff; Lang, Russell

    2012-01-01

    Background: A camera-based microswitch technology was recently developed to monitor small facial responses of persons with multiple disabilities and allow those responses to control environmental stimulation. This study assessed such a technology with 2 new participants using slight variations of previous responses. Method: The technology involved…

  17. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  18. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  19. Fabrication and Characterization of 640x486 GaAs Based Quantum Well Infrared Photodetector (QWIP) Snapshot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S. D.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Carralejo, R.; Shott, C. A.; Maker, P. D.; Miller, R. E.

    1997-01-01

    In this paper, we discuss the development of a very sensitive long wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in quantum efficiency, NEΔT, uniformity, and operability.

  20. Automated Ground-based Time-lapse Camera Monitoring of West Greenland ice sheet outlet Glaciers: Challenges and Solutions

    NASA Astrophysics Data System (ADS)

    Ahn, Y.; Box, J. E.; Balog, J.; Lewinter, A.

    2008-12-01

    Monitoring Greenland outlet glaciers using remotely sensed data has drawn great attention in the earth science communities for decades, and time series analysis of sensor data has provided important information on glacier flow variability by detecting speed and thickness changes, tracking features and acquiring model input. Thanks to advancements in commercial digital camera technology and increased solid state storage, we activated automatic ground-based time-lapse camera stations with high spatial/temporal resolution at west Greenland outlet glaciers and collected one-hour-interval data continuously for more than one year at some, but not all, sites. We believe that important information on ice dynamics is contained in these data and that terrestrial mono-/stereo-photogrammetry can provide the theoretical and practical fundamentals for data processing, along with digital image processing techniques. Time-lapse images over these periods in west Greenland show various phenomena. Problematic are rain, snow, fog, shadows, freezing of water on the camera enclosure window, image over-exposure, camera motion, sensor platform drift, fox chewing of instrument cables, and the pecking of the plastic window by ravens. Other problems include feature identification, camera orientation, image registration, feature matching in image pairs, and feature tracking. Another obstacle is that a non-metric digital camera contains large distortion that must be compensated for precise photogrammetric use. Further, a massive number of images need to be processed in a way that is sufficiently computationally efficient. We meet these challenges by 1) identifying problems in possible photogrammetric processes, 2) categorizing them based on feasibility, and 3) clarifying limitations and alternatives, while emphasizing displacement computation and analyzing regional/temporal variability. We experiment with mono- and stereo-photogrammetric techniques with the aid of automatic correlation matching for efficiently handling the enormous
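
    Displacement computation by automatic correlation matching, as mentioned at the end, can be sketched with normalized cross-correlation between a template from one frame and a search window in the next; the patch and search sizes are placeholders, not values from the study:

        import cv2

        def feature_displacement(img0, img1, x, y, size=64, search=128):
            # Track one glacier-surface feature between two time-lapse frames (grayscale
            # images, feature assumed far enough from the border that the windows fit).
            tpl = img0[y:y + size, x:x + size]
            y0, x0 = max(y - search, 0), max(x - search, 0)
            win = img1[y0:y0 + size + 2 * search, x0:x0 + size + 2 * search]
            res = cv2.matchTemplate(win, tpl, cv2.TM_CCOEFF_NORMED)
            _, _, _, max_loc = cv2.minMaxLoc(res)
            # Pixel offset (dx, dy) of the best correlation peak relative to the old position.
            return (x0 + max_loc[0]) - x, (y0 + max_loc[1]) - y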

  1. Secondary caries detection with a novel fluorescence-based camera system in vitro

    NASA Astrophysics Data System (ADS)

    Brede, Olivier; Wilde, Claudia; Krause, Felix; Frentzen, Matthias; Braun, Andreas

    2010-02-01

    The aim of the study was to assess the ability of a fluorescence-based optical system to detect secondary caries. The optical detection system (VistaProof) illuminates the tooth surfaces with blue light emitted by high-power GaN LEDs at 405 nm. Employing this almost monochromatic excitation, fluorescence is analyzed using an RGB camera chip and encoded in color gradations (blue - red - orange - yellow) by software (DBSWIN), indicating the degree of caries destruction. 31 freshly extracted teeth with existing fillings and secondary caries were cleaned, excavated and refilled with the same kind of restorative material. 19 of them were refilled with amalgam, 12 were refilled with a composite resin. Each step was analyzed with the respective software and evaluated statistically. Differences were considered statistically significant at p<0.05. There was no difference between measurements at baseline and after cleaning (Mann-Whitney, p>0.05). There was a significant difference between baseline measurements of the teeth primarily filled with composite resins and the refilled situation (p=0.014). There was also a significant difference between the non-excavated and the excavated group (composite p=0.006, amalgam p=0.018). The in vitro study showed that the fluorescence-based system allows detection of secondary caries next to composite resin fillings but not next to amalgam restorations. Cleaning of the teeth is not necessary if there is no visible plaque. Further studies have to show whether the system yields the same promising results in vivo.

  2. A high-sensitivity 2x2 multi-aperture color camera based on selective averaging

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Kagawa, Keiichiro; Takasawa, Taishi; Seo, Min-Woong; Yasutomi, Keita; Kawahito, Shoji

    2015-03-01

    To demonstrate the low-noise performance of a multi-aperture imaging system using a selective averaging method, an ultra-high-sensitivity multi-aperture color camera with 2×2 apertures is being developed. In low-light conditions, random telegraph signal (RTS) noise and dark current white defects become visible, which greatly degrades image quality. To reduce these kinds of noise as well as to increase the number of incident photons, a multi-aperture imaging system composed of an array of lenses and CMOS image sensors (CIS) is utilized, with selective averaging used to minimize the synthetic sensor noise at every pixel. It is verified by simulation that the effective noise at the peak of the noise histogram is reduced from 1.44 e- to 0.73 e- in a 2×2-aperture system, where RTS noise and dark current white defects have been successfully removed. In this work, a prototype based on low-noise color sensors with 1280×1024 pixels fabricated in 0.18 μm CIS technology is considered. The pixel pitch is 7.1 μm × 7.1 μm. The sensor noise is around 1 e- based on folding-integration and cyclic column ADCs, and low voltage differential signaling (LVDS) is used to improve noise immunity. The synthetic F-number of the prototype is 0.6.
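
    A sketch of per-pixel selective averaging under stated assumptions: at every pixel the apertures are sorted by their calibrated noise and the k lowest-noise samples are averaged, with k chosen to minimize the synthetic noise of the mean; the exact selection rule in the paper may differ.

        import numpy as np

        def selective_average(stack, noise_maps):
            # stack, noise_maps: (n_apertures, H, W) aligned images and per-pixel noise maps.
            order = np.argsort(noise_maps, axis=0)                      # lowest-noise apertures first
            sigma = np.take_along_axis(noise_maps, order, axis=0)
            vals = np.take_along_axis(stack, order, axis=0)
            k_range = np.arange(1, stack.shape[0] + 1)[:, None, None]
            synth = np.sqrt(np.cumsum(sigma ** 2, axis=0)) / k_range    # noise of a k-sample mean
            k_best = np.argmin(synth, axis=0) + 1                       # optimal k per pixel
            csum = np.cumsum(vals, axis=0)
            return np.take_along_axis(csum, (k_best - 1)[None], axis=0)[0] / k_best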

  3. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera.

    PubMed

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-02-04

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots.
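
    Spot-to-spot matching with SIFT, as described above, reduces to counting ratio-test matches between the live view and the stored spot image; the match threshold below is an illustrative value, not the one used in the paper:

        import cv2

        def is_same_spot(img_query, img_reference, min_good_matches=25):
            # Detect SIFT keypoints/descriptors in both grayscale images.
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img_query, None)
            k2, d2 = sift.detectAndCompute(img_reference, None)
            if d1 is None or d2 is None:
                return False
            # Lowe's ratio test on brute-force nearest-neighbour matches.
            matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            return len(good) >= min_good_matches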

  4. A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    PubMed Central

    Takizawa, Hotaka; Orita, Kazunori; Aoyagi, Mayumi; Ezaki, Nobuo; Mizuno, Shinji

    2017-01-01

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling their memories related to important locations, called spots, that they visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale invariant feature transform. The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users do not need any special devices or systems except smartphones and do not need to remember the spots where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the global positioning system. The proposed system has been evaluated by two experiments: image matching tests and a user study. The experimental results suggested the effectiveness of the system to help visually impaired individuals, including blind individuals, recall information about regularly-visited spots. PMID:28165403

  5. ATLAAS: an automatic decision tree-based learning algorithm for advanced image segmentation in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Berthon, Beatrice; Marshall, Christopher; Evans, Mererid; Spezi, Emiliano

    2016-07-01

    Accurate and reliable tumour delineation on positron emission tomography (PET) is crucial for radiotherapy treatment planning. PET automatic segmentation (PET-AS) eliminates intra- and interobserver variability, but there is currently no consensus on the optimal method to use, as different algorithms appear to perform better for different types of tumours. This work aimed to develop a predictive segmentation model, trained to automatically select and apply the best PET-AS method, according to the tumour characteristics. ATLAAS, the automatic decision tree-based learning algorithm for advanced segmentation, is based on supervised machine learning using decision trees. The model includes nine PET-AS methods and was trained on 100 PET scans with known true contours. A decision tree was built for each PET-AS algorithm to predict its accuracy, quantified using the Dice similarity coefficient (DSC), according to the tumour volume, tumour peak to background SUV ratio and a regional texture metric. The performance of ATLAAS was evaluated for 85 PET scans obtained from fillable and printed subresolution sandwich phantoms. ATLAAS showed excellent accuracy across a wide range of phantom data and predicted the best or near-best segmentation algorithm in 93% of cases. ATLAAS outperformed all single PET-AS methods on fillable phantom data with a DSC of 0.881, while the DSC for H&N phantom data was 0.819. DSCs higher than 0.650 were achieved in all cases. ATLAAS is an advanced automatic image segmentation algorithm based on decision tree predictive modelling, which can be trained on images with known true contours to predict the best PET-AS method when the true contour is unknown. ATLAAS provides robust and accurate image segmentation with potential applications to radiation oncology.
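
    A minimal sketch of the predictive-selection idea, assuming one regression tree per PET-AS method that predicts its Dice score from the three features named above; the tree depth and data layout are placeholders, not details from the paper:

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        def train_selectors(features, dice_per_method):
            # features: (n_scans, 3) = tumour volume, peak-to-background SUV ratio, texture metric.
            # dice_per_method: dict mapping a PET-AS method name to its (n_scans,) Dice scores.
            return {name: DecisionTreeRegressor(max_depth=4).fit(features, dsc)
                    for name, dsc in dice_per_method.items()}

        def select_method(trees, new_features):
            # Predict each method's Dice on an unseen scan and pick the best one.
            x = np.asarray(new_features).reshape(1, -1)
            preds = {name: tree.predict(x)[0] for name, tree in trees.items()}
            return max(preds, key=preds.get)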

  6. Positron beam studies of transients in semiconductors

    NASA Astrophysics Data System (ADS)

    Beling, C. D.; Ling, C. C.; Cheung, C. K.; Naik, P. S.; Zhang, J. D.; Fung, S.

    2006-02-01

    Vacancy-sensing positron deep level transient spectroscopy (PDLTS) is a positron beam-based technique that seeks to provide information on the electronic ionization levels of vacancy defects probed by the positron through the monitoring of thermal transients. The experimental discoveries leading to the concept of vacancy-sensing PDLTS are first reviewed. The major problem associated with this technique is discussed, namely the strong electric fields established in the near-surface region of the sample during the thermal transient, which tend to sweep positrons into the contact with negligible defect trapping. New simulations are presented which suggest that under certain conditions a sufficient fraction of positrons may be trapped into ionizing defects, rendering the PDLTS technique workable. Some suggestions are made for techniques that might avoid the problematic electric field, such as optical-PDLTS, where deep levels are populated using light, and the use of high forward-bias currents for trap filling.

  7. Undulator-Based Production of Polarized Positrons, A Proposal for the 50-GeV Beam in the FFTB

    SciTech Connect

    G. Alexander; P. Anthony; V. Bharadwaj; Yu.K. Batygin; T. Behnke; S. Berridge; G.R. Bower; W. Bugg; R. Carr; E. Chudakov; J.E. Clendenin; F.J. Decker; Yu. Efremenko; T. Fieguth; K. Flottmann; M. Fukuda; V. Gharibyan; T. Handler; T. Hirose; R.H. Iverson; Yu. Kamyshkov; H. Kolanoski; T. Lohse; Chang-guo Lu; K.T. McDonald; N. Meyners; R. Michaels; A.A. Mikhailichenko; K. Monig; G. Moortgat-Pick; M. Olson; T. Omori; D. Onoprienko; N. Pavel; R. Pitthan; M. Purohit; L. Rinolfi; K.P. Schuler; J.C. Sheppard; S. Spanier; A. Stahl; Z.M. Szalata; J. Turner; D. Walz; A. Weidemann; J. Weisend

    2003-06-01

    The full exploitation of the physics potential of future linear colliders such as the JLC, NLC, and TESLA will require the development of polarized positron beams. In the proposed scheme of Balakin and Mikhailichenko [1] a helical undulator is employed to generate photons of several MeV with circular polarization which are then converted in a relatively thin target to generate longitudinally polarized positrons. This experiment, E-166, proposes to test this scheme to determine whether such a technique can produce polarized positron beams of sufficient quality for use in future linear colliders. The experiment will install a meter-long, short-period, pulsed helical undulator in the Final Focus Test Beam (FFTB) at SLAC. A low-emittance 50-GeV electron beam passing through this undulator will generate circularly polarized photons with energies up to 10 MeV. These polarized photons are then converted to polarized positrons via pair production in thin targets. Titanium and tungsten targets, which are both candidates for use in linear colliders, will be tested. The experiment will measure the flux and polarization of the undulator photons, and the spectrum and polarization of the positrons produced in the conversion target, and compare the measurement results to simulations. Thus the proposed experiment directly tests for the first time the validity of the simulation programs used for the physics of polarized pair production in finite matter, in particular the effects of multiple scattering on polarization. Successful comparison of the experimental results to the simulations will lead to greater confidence in the proposed designs of polarized positrons sources for the next generation of linear colliders. This experiment requests six-weeks of time in the FFTB beam line: three weeks for installation and setup and three weeks of beam for data taking. A 50-GeV beam with about twice the SLC emittance at a repetition rate of 30 Hz is required.

  8. Positron annihilation spectroscopy on a beam of positrons the LEPTA facility

    NASA Astrophysics Data System (ADS)

    Ahmanova, E. V.; Eseev, M. K.; Kobets, A. G.; Meshkov, I. N.; Orlov, O. S.; Sidorin, A. A.; Siemek, K.; Horodek, P.

    2016-12-01

    The results and possibilities of studying sample surfaces by the Doppler method of positron annihilation spectroscopy (PAS) with a monochromatic positron beam at the LEPTA facility are presented in this paper. The method, with its high sensitivity to defects such as vacancies and dislocations, allows scanning of the surface and near-surface sample layers to a depth of several micrometers by means of the Doppler broadening of the annihilation line. The opportunities for developing a PAS method based on measuring the positron lifetime in a sample irradiated by an ordered flow of positrons from the injector of the LEPTA accelerator complex at JINR are also discussed.

  9. Evaluation of a CdTe semiconductor based compact gamma camera for sentinel lymph node imaging

    SciTech Connect

    Russo, Paolo; Curion, Assunta S.; Mettivier, Giovanni; Esposito, Michela; Aurilio, Michela; Caraco, Corradina; Aloj, Luigi; Lastoria, Secondo

    2011-03-15

    Purpose: The authors assembled a prototype compact gamma-ray imaging probe (MediPROBE) for sentinel lymph node (SLN) localization. This probe is based on a semiconductor pixel detector. Its basic performance was assessed in the laboratory and clinically in comparison with a conventional gamma camera. Methods: The room-temperature CdTe pixel detector (1 mm thick) has 256×256 square pixels arranged with a 55 μm pitch (sensitive area 14.08×14.08 mm²), coupled pixel-by-pixel via bump-bonding to the Medipix2 photon-counting readout CMOS integrated circuit. The imaging probe is equipped with a set of three interchangeable knife-edge pinhole collimators (0.94, 1.2, or 2.1 mm effective diameter at 140 keV) and its focal distance can be regulated in order to set a given field of view (FOV). A typical FOV of 70 mm at 50 mm skin-to-collimator distance corresponds to a minification factor 1:5. The detector is operated at a single low-energy threshold of about 20 keV. Results: For 99mTc, at 50 mm distance, a background-subtracted sensitivity of 6.5×10^-3 cps/kBq and a system spatial resolution of 5.5 mm FWHM were obtained for the 0.94 mm pinhole; corresponding values for the 2.1 mm pinhole were 3.3×10^-2 cps/kBq and 12.6 mm. The dark count rate was 0.71 cps. Clinical images in three patients with melanoma indicate detection of the SLNs with acquisition times between 60 and 410 s with an injected activity of 26 MBq 99mTc and prior localization with standard gamma camera lymphoscintigraphy. Conclusions: The laboratory performance of this imaging probe is limited by the pinhole collimator performance and the necessity of working in minification due to the limited detector size. However, in clinical operative conditions, the CdTe imaging probe was effective in detecting SLNs with adequate resolution and an acceptable sensitivity. Sensitivity is expected to improve with the future availability of a larger CdTe detector permitting operation at shorter

  10. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  11. Hyperspectral characterization of fluorophore diffusion in human skin using a sCMOS based hyperspectral camera

    NASA Astrophysics Data System (ADS)

    Hernandez-Palacios, J.; Haug, I. J.; Grimstad, Ø.; Randeberg, L. L.

    2011-07-01

    Hyperspectral fluorescence imaging is a modality combining high spatial and spectral resolution with increased sensitivity for low photon counts. The main objective of the current study was to investigate whether this technique is a suitable tool for characterization of diffusion properties in human skin. This was done by imaging fluorescence from Alexa 488 in ex vivo human skin samples using an sCMOS based hyperspectral camera. Pre-treatment with acetone, DMSO and mechanical micro-needling of the stratum corneum created variation in epidermal permeability between the measured samples. Selected samples were also stained using fluorescence-labelled biopolymers. The effect of fluorescence enhancers on transdermal diffusion could be documented from the collected data. Acetone was found to have an enhancing effect on the transport, and the results indicate that the biopolymers might have a similar effect. The enhancement from these compounds was, however, not as prominent as the effect of mechanical penetration of the sample using a micro-needling device. Hyperspectral fluorescence imaging has thus proven to be an interesting tool for characterization of fluorophore diffusion in ex vivo skin samples. Further work will include repetition of the measurements on a shorter time scale and mathematical modeling of the diffusion process to determine the diffusivity in skin for the compounds in question.

  12. Potential of Uav-Based Laser Scanner and Multispectral Camera Data in Building Inspection

    NASA Astrophysics Data System (ADS)

    Mader, D.; Blaskow, R.; Westfeld, P.; Weller, C.

    2016-06-01

    Conventional building inspection of bridges, dams or large constructions in general is rather time consuming and often expensive due to traffic closures and the need for special heavy vehicles such as under-bridge inspection units or other large lifting platforms. In view of this, an unmanned aerial vehicle (UAV) can be more reliable and efficient as well as less expensive and simpler to operate. The utility of UAVs as an assisting tool in building inspections is obvious. Furthermore, light-weight special sensors such as infrared and thermal cameras as well as laser scanners are available and predestined for use on unmanned aircraft systems. Such a flexible low-cost system has been realized in the ADFEX project with the goal of time-efficient object exploration, monitoring and damage detection. For this purpose, a fleet of UAVs, equipped with several sensors for navigation, obstacle avoidance and 3D object-data acquisition, has been developed and constructed. This contribution deals with the potential of UAV-based data in building inspection. An overview of the ADFEX project, sensor specifications and the requirements of building inspections in general is given. On the basis of results achieved in practical studies, the applicability and potential of the UAV system in building inspection are presented and discussed.

  13. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    PubMed Central

    Furferi, Rocco; Governi, Lapo; Volpe, Yary; Carfagni, Monica

    2016-01-01

    One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved its effectiveness in reliably assessing color correspondence in real time. PMID:27589765
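
    Colour correspondence of the kind assessed here is commonly quantified by a colour difference such as CIE76 ΔE*ab between the measured tow and the reference. The sketch below computes it from CIE XYZ tristimulus values (which a spectral camera can provide after integration against the observer functions); the D65 white point and the numerical values are placeholders, not data from the paper.

```python
import numpy as np

def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
    """CIE XYZ -> CIELAB, here for the D65 reference white (2-degree observer)."""
    t = np.asarray(xyz, float) / np.asarray(white, float)
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,            # L*
                     500 * (f[0] - f[1]),        # a*
                     200 * (f[1] - f[2])])       # b*

def delta_e76(xyz_sample, xyz_reference):
    """CIE76 colour difference between two tristimulus measurements."""
    return float(np.linalg.norm(xyz_to_lab(xyz_sample) - xyz_to_lab(xyz_reference)))

# Placeholder tristimulus values for a carded tow and its reference fabric.
print(f"dE*ab = {delta_e76([41.2, 35.8, 12.1], [42.0, 36.5, 11.4]):.2f}")
```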

  14. Design Considerations for the Next-Generation MAPMT-Based Monolithic Scintillation Camera

    PubMed Central

    Salçın, Esen; Barber, H. Bradford; Furenlid, Lars R.

    2015-01-01

    Multi-anode photomultiplier tubes (MAPMTs) offer high spatial resolution with their small anodes, which may range from 64 to 1024 in number per tube. In order to increase detector size, MAPMT modules can be arranged in arrays and combined into a single modular scintillation camera. The large number of channels that then require amplification and digitization, however, becomes impractical unless the signals are combined or reduced in some manner. Conventional approaches use resistive charge-division readouts with a centroid algorithm (or a variant of it) for simplicity of the electronic circuitry and fast execution. However, coupling signals from many anodes may cause significant information loss and limit the achievable resolution. In this study, a new approach for optimizing readout-electronics design for MAPMTs, based on an analysis of the information content in the signals, is presented. An adaptive readout scheme to be used with maximum-likelihood estimation methods is proposed. This scheme achieves precision in estimating event parameters that is close to what is achieved by retaining all signals. PMID:26347497
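
    The maximum-likelihood estimation referred to above can be illustrated with a toy grid-search estimator. This is a minimal sketch, not the authors' implementation: the Gaussian light-spread stand-in for a mean detector response function (MDRF) and the Poisson noise model are assumptions.

```python
import numpy as np

def ml_position(signals, mdrf):
    """Grid-search maximum-likelihood estimate of a scintillation event position.

    signals : (n_anodes,) observed anode outputs for one event
    mdrf    : (ny, nx, n_anodes) expected anode signals for each candidate position
    """
    mu = np.clip(mdrf, 1e-12, None)                       # avoid log(0)
    # Poisson log-likelihood up to terms independent of position.
    loglike = np.sum(signals * np.log(mu) - mu, axis=-1)
    return np.unravel_index(np.argmax(loglike), loglike.shape)

# Hypothetical 8x8-anode MAPMT read out on a 32x32 estimation grid, with a
# Gaussian light-spread model standing in for a measured MDRF.
ay, ax = np.meshgrid(np.linspace(0, 1, 8), np.linspace(0, 1, 8), indexing="ij")
g = np.linspace(0, 1, 32)
gy, gx = np.meshgrid(g, g, indexing="ij")
d2 = (gy[..., None, None] - ay) ** 2 + (gx[..., None, None] - ax) ** 2
mdrf = 200.0 * np.exp(-d2 / (2 * 0.15 ** 2)).reshape(32, 32, 64)

rng = np.random.default_rng(0)
event = rng.poisson(mdrf[10, 25])        # simulate an event at grid index (10, 25)
print(ml_position(event, mdrf))          # typically (10, 25) or a neighbouring cell
```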

  15. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse component of the image-motion vector of a space camera; adjusting it reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, and it has many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of a drift-adjusting mechanism are described. A drift-adjusting system based on the ultrasonic motor T-60 was designed, composed of the drift-adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift-adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder was used as the sensor of the closed position loop, and a voltage driving circuit was designed as the ultrasonic-wave generator. A mathematical model of the drive circuit of the ultrasonic motor T-60 was built using MATLAB modules. In order to verify the validity of the drift-adjusting system, a disturbance source was introduced and a simulation analysis was carried out. The motor-drive control system for drift adjustment was designed with an improved PID controller. The drift-angle adjusting system has advantages such as small size, simple configuration, high position-control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system can accomplish the drift-angle adjusting task very well.
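
    The closed position loop described (a DSP reading a photoelectric encoder and driving the ultrasonic motor through a PID law) can be sketched as follows. The gains, sample time and first-order plant stand-in are illustrative assumptions, not values from the paper.

```python
class PID:
    """Discrete PID controller for a position loop (illustrative gains)."""
    def __init__(self, kp, ki, kd, dt, out_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_limit = out_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(-self.out_limit, min(self.out_limit, u))   # clamp drive command

# Toy closed loop: the "plant" integrates the drive command into a drift angle.
pid = PID(kp=4.0, ki=1.5, kd=0.05, dt=0.001, out_limit=10.0)
angle, target = 0.0, 1.2                 # degrees
for _ in range(3000):
    drive = pid.update(target, angle)
    angle += 0.5 * drive * pid.dt        # crude integrator model of motor + mechanism
print(f"final drift angle: {angle:.4f} deg (target {target} deg)")
```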

  16. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    DOE PAGES

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; ...

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm3. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those reported in previous research using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.
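
    Compton imaging of the kind described reconstructs, for each two-interaction event, a cone whose opening angle follows from the deposited energies. A minimal sketch of that kinematic step (not the authors' reconstruction code, and assuming full energy deposition) is:

```python
import math

MEC2_KEV = 510.999   # electron rest energy

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone for a two-interaction event.

    e_scatter_kev : energy deposited at the first (scattering) interaction
    e_absorb_kev  : energy deposited at the second (absorbing) interaction
    Assumes the photon is fully absorbed, so E0 = e_scatter + e_absorb.
    """
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically inconsistent energy pair")
    return math.degrees(math.acos(cos_theta))

# Example: a 662 keV (137Cs) photon depositing 200 keV in the first interaction.
print(f"{compton_cone_angle(200.0, 462.0):.1f} deg")
```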

  17. Mini Compton Camera Based on an Array of Virtual Frisch-Grid CdZnTe Detectors

    SciTech Connect

    Lee, Wonho; Bolotnikov, Aleksey; Lee, Taewoong; Camarda, Giuseppe; Cui, Yonggang; Gul, Rubi; Hossain, Anwar; Utpal, Roy; Yang, Ge; James, Ralph

    2016-02-15

    In this study, we constructed a mini Compton camera based on an array of CdZnTe detectors and assessed its spectral and imaging properties. The entire array consisted of 6×6 Frisch-grid CdZnTe detectors, each with a size of 6×6×15 mm3. Since it is easier and more practical to grow small CdZnTe crystals rather than large monolithic ones, constructing a mosaic array of parallelepiped crystals can be an effective way to build a more efficient, large-volume detector. With the fully operational CdZnTe array, we measured the energy spectra for 133Ba, 137Cs and 60Co radiation sources; we also located these sources using a Compton imaging approach. Although the Compton camera was small enough to hand-carry, its intrinsic efficiency was several orders of magnitude higher than those reported in previous research using spatially separated arrays, because our camera measured the interactions inside the CZT detector array, wherein the detector elements were positioned very close to each other. Lastly, the performance of our camera was compared with that of a camera based on a pixelated detector.

  18. Calibration of a dual-PTZ-camera system for stereo vision based on parallel particle swarm optimization method

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Wang, Huai-Ming; Lee, Shih-Tseng; Wu, Chieh-Tsai; Hsu, Ming-Hsi

    2014-02-01

    This work investigates the calibration of a stereo vision system based on two PTZ (Pan-Tilt-Zoom) cameras. As the accuracy of the system depends not only on intrinsic parameters, but also on the geometric relationships between rotation axes of the cameras, the major concern is the development of an effective and systematic way to obtain these relationships. We derived a complete geometric model of the dual-PTZ-camera system and proposed a calibration procedure for the intrinsic and external parameters of the model. The calibration method is based on Zhang's approach using an augmented checkerboard composed of eight small checkerboards, and is formulated as an optimization problem to be solved by an improved particle swarm optimization (PSO) method. Two Sony EVI-D70 PTZ cameras were used for the experiments. The root-mean-square errors (RMSE) of corner distances in the horizontal and vertical direction are 0.192 mm and 0.115 mm, respectively. The RMSE of overlapped points between the small checkerboards is 1.3958 mm.
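
    The particle swarm optimization step used for the calibration can be illustrated generically. The sketch below is a plain (non-parallel) PSO minimizing an arbitrary cost; the reprojection-error cost actually used for the dual-PTZ system is stood in for by a toy quadratic, and all hyperparameters are assumptions.

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=40, iters=200, bounds=(-5.0, 5.0),
                 w=0.72, c1=1.49, c2=1.49, seed=0):
    """Basic particle swarm optimization of `cost` over a box-bounded domain."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))      # particle positions
    v = np.zeros_like(x)                                  # particle velocities
    pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, float(pbest_val.min())

# Toy stand-in for a reprojection-error cost, with its minimum at (1, -2, 0.5).
target = np.array([1.0, -2.0, 0.5])
best, best_val = pso_minimize(lambda p: float(np.sum((p - target) ** 2)), dim=3)
print(best, best_val)
```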

  19. Generalized free-space diffuse photon transport model based on the influence analysis of a camera lens diaphragm.

    PubMed

    Chen, Xueli; Gao, Xinbo; Qu, Xiaochao; Chen, Duofang; Ma, Xiaopeng; Liang, Jimin; Tian, Jie

    2010-10-10

    The camera lens diaphragm is an important component in a noncontact optical imaging system and has a crucial influence on the images registered on the CCD camera. However, this influence has not been taken into account in the existing free-space photon transport models. To model the photon transport process more accurately, a generalized free-space photon transport model is proposed. It combines Lambertian source theory with an analysis of the influence of the camera lens diaphragm to simulate the photon transport process in free space. In addition, the radiance theorem is adopted to establish the energy relationship between the virtual detector and the CCD camera. The accuracy and feasibility of the proposed model are validated with a Monte-Carlo-based free-space photon transport model and a physical phantom experiment. A comparison study with our previous hybrid radiosity-radiance theorem based model demonstrates the improved performance and the potential of the proposed model for simulating the photon transport process in free space.

  20. Underwater camera with depth measurement

    NASA Astrophysics Data System (ADS)

    Wang, Wei-Chih; Lin, Keng-Ren; Tsui, Chi L.; Schipf, David; Leang, Jonathan

    2016-04-01

    The objective of this study is to develop an RGB-D (video + depth) camera that provides three-dimensional image data for use in the haptic feedback of a robotic underwater ordnance recovery system. Two camera systems were developed and studied. The first depth camera relies on structured light (as used by the Microsoft Kinect), where the displacement of an object is determined by variations in the geometry of a projected pattern. The other camera system is based on a Time of Flight (ToF) depth camera. The results for the structured light camera system show that it requires a stronger light source with a similar operating wavelength and bandwidth to achieve a desirable working distance in water. This approach might not be robust enough for our proposed underwater RGB-D camera system, as it would require a complete re-design of the light source component. The ToF camera system, instead, allows an arbitrary placement of light source and camera. The intensity output of the broadband LED light source in the ToF camera system can be increased by arranging the LEDs in an array, and the LEDs can be modulated comfortably with any waveform and frequency required by the ToF camera. In this paper, both cameras were evaluated and experiments were conducted to demonstrate the versatility of the ToF camera.
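
    For the continuous-wave ToF approach mentioned above, range is recovered from the phase shift between the modulated illumination and the returned signal. The four-bucket formulation below is a generic textbook sketch, not the specific camera's pipeline; the sign convention of the arctangent and the use of the speed of light in water are stated assumptions.

```python
import math

C_WATER = 2.25e8   # approximate speed of light in water (m/s), roughly c / 1.33

def tof_range(a0, a1, a2, a3, f_mod_hz, c=C_WATER):
    """One-way range from four phase-stepped samples of a CW ToF pixel.

    a0..a3   : correlation samples at 0, 90, 180 and 270 degree phase offsets
    f_mod_hz : modulation frequency of the LED illumination
    """
    phase = math.atan2(a1 - a3, a0 - a2) % (2.0 * math.pi)
    return c * phase / (4.0 * math.pi * f_mod_hz)

# Example: samples corresponding to a ~1 rad phase shift at 20 MHz modulation.
print(f"{tof_range(0.54, 0.84, -0.54, -0.84, 20e6):.3f} m")   # about 0.9 m in water
```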

  1. Automated cloud classification using a ground based infra-red camera and texture analysis techniques

    NASA Astrophysics Data System (ADS)

    Rumi, Emal; Kerr, David; Coupland, Jeremy M.; Sandford, Andrew P.; Brettle, Mike J.

    2013-10-01

    Clouds play an important role in influencing the dynamics of local and global weather and climate conditions. Continuous monitoring of clouds is vital for weather forecasting and for air-traffic control. Convective clouds such as Towering Cumulus (TCU) and Cumulonimbus clouds (CB) are associated with thunderstorms, turbulence and atmospheric instability. Human observers periodically report the presence of CB and TCU clouds during operational hours at airports and observatories; however such observations are expensive and time limited. Robust, automatic classification of cloud type using infrared ground-based instrumentation offers the advantage of continuous, real-time (24/7) data capture and the representation of cloud structure in the form of a thermal map, which can greatly help to characterise certain cloud formations. The work presented here utilised a ground based infrared (8-14 μm) imaging device mounted on a pan/tilt unit for capturing high spatial resolution sky images. These images were processed to extract 45 separate textural features using statistical and spatial frequency based analytical techniques. These features were used to train a weighted k-nearest neighbour (KNN) classifier in order to determine cloud type. Ground truth data were obtained by inspection of images captured simultaneously from a visible wavelength colour camera at the same installation, with approximately the same field of view as the infrared device. These images were classified by a trained cloud observer. Results from the KNN classifier gave an encouraging success rate. A Probability of Detection (POD) of up to 90% with a Probability of False Alarm (POFA) as low as 16% was achieved.
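
    The classification stage described above, a weighted k-nearest-neighbour vote over texture features, can be sketched generically. The inverse-distance weighting and the synthetic 45-dimensional feature vectors below are assumptions for illustration; the paper's actual texture features and training data are not reproduced.

```python
import numpy as np

def weighted_knn_predict(train_x, train_y, query, k=5, eps=1e-9):
    """Classify `query` by an inverse-distance-weighted vote of its k nearest
    training samples. train_x: (n, d) features, train_y: (n,) integer labels."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = np.argsort(d)[:k]
    weights = 1.0 / (d[nearest] + eps)
    votes = {}
    for label, w in zip(train_y[nearest], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Hypothetical 3-class example (e.g. CB / TCU / other) with 45-D feature vectors.
rng = np.random.default_rng(1)
centres = rng.normal(size=(3, 45))
train_x = np.vstack([c + 0.3 * rng.normal(size=(50, 45)) for c in centres])
train_y = np.repeat(np.arange(3), 50)
query = centres[2] + 0.3 * rng.normal(size=45)
print(weighted_knn_predict(train_x, train_y, query))   # expected class: 2
```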

  2. Positron-rubidium scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.

    1990-01-01

    A 5-state close-coupling calculation (5s-5p-4d-6s-6p) was carried out for positron-Rb scattering in the energy range 3.7 to 28.0 eV. In contrast to the results of similar close-coupling calculations for positron-Na and positron-K scattering, the (effective) total integrated cross section has an energy dependence which is contrary to recent experimental measurements.

  3. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needed to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation had to be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ in VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
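
    The coordinate transformation and sub-pixel interpolation mentioned above can be illustrated with a generic log-polar-to-Cartesian resampling; the ring/sector layout and the bilinear interpolation below are assumptions standing in for the actual sensor's pixel distribution, not the authors' code.

```python
import numpy as np

def logpolar_to_cartesian(lp, out_size, r_min, r_max):
    """Resample a log-polar image (rings x sectors) onto a Cartesian grid.

    lp          : (n_rings, n_sectors) samples; ring radius grows exponentially
    out_size    : side length of the square output image in pixels
    r_min/r_max : radii (in output pixels) of the innermost/outermost ring
    """
    n_rings, n_sectors = lp.shape
    c = (out_size - 1) / 2.0
    y, x = np.mgrid[0:out_size, 0:out_size]
    dx, dy = x - c, y - c
    r = np.hypot(dx, dy)
    theta = np.mod(np.arctan2(dy, dx), 2 * np.pi)

    # Fractional ring/sector coordinates for every Cartesian pixel.
    r_c = np.clip(r, r_min, r_max)
    ring = np.log(r_c / r_min) / np.log(r_max / r_min) * (n_rings - 1)
    sector = theta / (2 * np.pi) * n_sectors

    r0 = np.clip(np.floor(ring).astype(int), 0, n_rings - 2)
    s0 = np.floor(sector).astype(int) % n_sectors
    s1 = (s0 + 1) % n_sectors
    fr, fs = ring - r0, sector - s0
    # Bilinear (sub-pixel) interpolation between neighbouring rings and sectors.
    val = ((1 - fr) * (1 - fs) * lp[r0, s0] + (1 - fr) * fs * lp[r0, s1]
           + fr * (1 - fs) * lp[r0 + 1, s0] + fr * fs * lp[r0 + 1, s1])
    out = np.zeros((out_size, out_size))
    valid = (r >= r_min) & (r <= r_max)
    out[valid] = val[valid]
    return out

# Toy example: a smooth test pattern on a 64-ring x 128-sector "retina".
rings, sectors = np.mgrid[0:64, 0:128]
lp = np.sin(sectors / 128.0 * 2 * np.pi) * rings / 64.0
img = logpolar_to_cartesian(lp, out_size=256, r_min=4.0, r_max=120.0)
print(img.shape, float(img.min()), float(img.max()))
```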

  4. Human detection based on the generation of a background image by using a far-infrared light camera.

    PubMed

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-03-19

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering. Additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective
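
    The first two steps described (background generation by median and average filtering, and thresholded differencing against that background) can be sketched as follows. This is a simplified stand-in that assumes a fixed camera and a stack of thermal frames; the adaptive threshold shown is illustrative and is not the paper's exact formulation.

```python
import numpy as np

def generate_background(frames, kernel=5):
    """Temporal median of a frame stack followed by a simple box (average) filter."""
    bg = np.median(frames, axis=0)
    pad = kernel // 2
    padded = np.pad(bg, pad, mode="edge")
    smooth = np.zeros_like(bg)
    for dy in range(kernel):
        for dx in range(kernel):
            smooth += padded[dy:dy + bg.shape[0], dx:dx + bg.shape[1]]
    return smooth / kernel ** 2

def candidate_mask(frame, background, k=3.0):
    """Pixel-difference mask; the threshold is scaled by background brightness."""
    diff = np.abs(frame.astype(float) - background)
    thr = k * diff.std() * (0.5 + 0.5 * background.mean() / max(background.max(), 1e-9))
    return diff > thr

# Synthetic 30-frame thermal sequence with one warm "person"-like region.
rng = np.random.default_rng(2)
frames = 20.0 + rng.normal(0.0, 0.5, size=(30, 120, 160))
test = frames[0].copy()
test[50:80, 60:75] += 8.0
mask = candidate_mask(test, generate_background(frames))
print(mask.sum(), "pixels flagged")
```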

  5. Human Detection Based on the Generation of a Background Image by Using a Far-Infrared Light Camera

    PubMed Central

    Jeon, Eun Som; Choi, Jong-Suk; Lee, Ji Hoon; Shin, Kwang Yong; Kim, Yeong Gon; Le, Toan Thanh; Park, Kang Ryoung

    2015-01-01

    The need for computer vision-based human detection has increased in fields such as security, intelligent surveillance and monitoring systems. However, performance enhancement of human detection based on visible light cameras is limited because of factors such as nonuniform illumination, shadows and low external light in the evening and at night. Consequently, human detection based on thermal (far-infrared light) cameras has been considered as an alternative. However, its performance is influenced by factors such as the low image resolution, low contrast and large noise of thermal images. It is also affected by the high temperature of backgrounds during the day. To solve these problems, we propose a new method for detecting human areas in thermal camera images. Compared to previous works, the proposed research is novel in the following four aspects. First, a background image is generated by median and average filtering. Additional filtering procedures based on maximum gray level, size filtering and region erasing are applied to remove the human areas from the background image. Second, candidate human regions in the input image are located by combining the pixel and edge difference images between the input and background images. The thresholds for the difference images are adaptively determined based on the brightness of the generated background image. Noise components are removed by component labeling, a morphological operation and size filtering. Third, detected areas that may contain more than two human regions are merged or separated based on the information in the horizontal and vertical histograms of the detected area. This procedure is adaptively operated based on the brightness of the generated background image. Fourth, a further procedure for the separation and removal of the candidate human regions is performed based on the size and the height-to-width ratio of the candidate regions, considering the camera viewing direction and perspective

  6. Dry imaging cameras.

    PubMed

    Indrajit, Ik; Alam, Aftab; Sahni, Hirdesh; Bhatia, Mukul; Sahu, Samaresh

    2011-04-01

    Dry imaging cameras are important hard-copy devices in radiology. Using a dry imaging camera, multiformat images from digital modalities in radiology are created from a sealed unit of unexposed films. The functioning of a modern dry camera involves a blend of concurrent processes in areas of diverse sciences such as computing, mechanics, thermodynamics, optics, electricity and radiography. Broadly, hard-copy devices are classified as laser-based and non-laser-based technologies. Compared with the working knowledge and technical awareness of other modalities in radiology, the understanding of the dry imaging camera is often superficial and neglected. To fill this void, this article outlines the key features of a modern dry camera and the important issues that impact radiology workflow.

  7. Preliminary considerations of an intense slow positron facility based on a {sup 78}Kr loop in the high flux isotopes reactor

    SciTech Connect

    Hulett, L.D. Jr.; Donohue, D.L.; Peretz, F.J.; Montgomery, B.H.; Hayter, J.B.

    1990-01-01

    Suggestions have been made to the National Steering Committee for the Advanced Neutron Source (ANS) by Mills that provisions be made to install a high-intensity slow positron facility, based on a {sup 78}Kr loop, that would be available to the general community of scientists interested in this field. The flux of thermal neutrons calculated for the ANS is 1 E + 15 sec{sup {minus}1} m{sup {minus}2}, which Mills has estimated will produce a 5 mm beam of slow positrons having a current of about 1 E + 12 sec{sup {minus}1}. The intensity of such a beam will be at least 3 orders of magnitude greater than those presently available. The construction of the ANS is not anticipated to be complete until the year 2000. In order to properly plan the design of the ANS, strong consideration is being given to a proof-of-principle experiment, using the presently available High Flux Isotope Reactor, to test the {sup 78}Kr loop technique. The positron current from the HFIR facility is expected to be about 1 E + 10 sec{sup {minus}1}, which is 2 orders of magnitude greater than any other available. If the experiment succeeds, a very valuable facility will be established, and important information will be generated on how the ANS should be designed. 3 refs., 1 fig.

  8. Study of material properties important for an optical property modulation-based radiation detection method for positron emission tomography.

    PubMed

    Tao, Li; Daghighian, Henry M; Levin, Craig S

    2017-01-01

    We compare the performance of two detector materials, cadmium telluride (CdTe) and bismuth silicon oxide (BSO), for an optical property modulation-based radiation detection method for positron emission tomography (PET), which is a potential new direction for dramatically improving the annihilation photon pair coincidence time resolution. We have shown that the induced current flow in the detector crystal resulting from ionizing radiation determines the strength of the optical modulation signal. A larger resistivity is favorable for reducing the dark current (noise) in the detector crystal, and thus the higher-resistivity BSO crystal has a lower (50% lower on average) noise level than CdTe. The CdTe and BSO crystals can achieve the same sensitivity under laser diode illumination at the same crystal bias voltage, while the BSO crystal is not as sensitive to 511-keV photons as the CdTe crystal at the same crystal bias voltage. The amplitude of the modulation signal induced by 511-keV photons in the BSO crystal is around 30% of that induced in the CdTe crystal under the same bias condition. In addition, we have found that the optical modulation strength increases linearly with crystal bias voltage before saturation. The modulation signal with CdTe tends to saturate at bias voltages higher than 1500 V due to its lower resistivity (and thus larger dark current), while the modulation signal strength with BSO still increases beyond 3500 V. Further increasing the bias voltage for BSO could potentially further enhance the modulation strength and thus the sensitivity.

  9. The DSLR Camera

    NASA Astrophysics Data System (ADS)

    Berkó, Ernő; Argyle, R. W.

    Cameras have developed significantly in the past decade; in particular, digital Single-Lens Reflex Cameras (DSLR) have appeared. As a consequence we can buy cameras of higher and higher pixel counts, and mass production has resulted in a great reduction of prices. The CMOS sensors used for imaging are increasingly sensitive, and the electronics in the cameras allows images to be taken with much less noise. The software background is developing in a similar way: intelligent programs are created for post-processing and other supplementary tasks. Nowadays we can find a digital camera in almost every household, and most of these cameras are DSLRs. These can be used very well for astronomical imaging, which is nicely demonstrated by the amount and quality of the spectacular astrophotos appearing in different publications. These examples also show how much post-processing software contributes to the rise in the standard of the pictures. To sum up, the DSLR camera serves as a cheap alternative to the CCD camera, with somewhat weaker technical characteristics. In the following, I will introduce how we can measure the main parameters (position angle and separation) of double stars, based on the methods, software and equipment I use. Others can easily apply these to their own circumstances.
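
    Given calibrated pixel coordinates of the two components, the separation and position angle mentioned in the last sentence follow from simple plate geometry. A minimal sketch (the plate scale and coordinates are placeholders, and the assumed axis orientation is stated in the docstring) is:

```python
import math

def double_star_measure(x1, y1, x2, y2, arcsec_per_px, north_angle_deg=0.0):
    """Separation (arcsec) and position angle (degrees, from north through east).

    Assumes the pixel frame is oriented with +y towards celestial north and
    +x towards the east; `north_angle_deg` absorbs any residual camera rotation
    determined from a calibration pair.
    """
    dx, dy = x2 - x1, y2 - y1
    separation = math.hypot(dx, dy) * arcsec_per_px
    position_angle = (math.degrees(math.atan2(dx, dy)) + north_angle_deg) % 360.0
    return separation, position_angle

# Hypothetical 0.8 arcsec/px DSLR + telescope setup, companion ~40 px to the NE.
print(double_star_measure(1000.0, 1000.0, 1028.3, 1028.3, 0.8))
```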

  10. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    NASA Astrophysics Data System (ADS)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  11. Human Visual System-Based Fundus Image Quality Assessment of Portable Fundus Camera Photographs.

    PubMed

    Wang, Shaoze; Jin, Kai; Lu, Haitong; Cheng, Chuming; Ye, Juan; Qian, Dahong

    2016-04-01

    Telemedicine and the medical "big data" era in ophthalmology highlight the use of non-mydriatic ocular fundus photography, which has given rise to indispensable applications of portable fundus cameras. However, in the case of portable fundus photography, non-mydriatic image quality is more vulnerable to distortions, such as uneven illumination, color distortion, blur, and low contrast. Such distortions are called generic quality distortions. This paper proposes an algorithm capable of selecting images of fair generic quality that would be especially useful to assist inexperienced individuals in collecting meaningful and interpretable data with consistency. The algorithm is based on three characteristics of the human visual system--multi-channel sensation, just noticeable blur, and the contrast sensitivity function to detect illumination and color distortion, blur, and low contrast distortion, respectively. A total of 536 retinal images, 280 from proprietary databases and 256 from public databases, were graded independently by one senior and two junior ophthalmologists, such that three partial measures of quality and generic overall quality were classified into two categories. Binary classification was implemented by the support vector machine and the decision tree, and receiver operating characteristic (ROC) curves were obtained and plotted to analyze the performance of the proposed algorithm. The experimental results revealed that the generic overall quality classification achieved a sensitivity of 87.45% at a specificity of 91.66%, with an area under the ROC curve of 0.9452, indicating the value of applying the algorithm, which is based on the human vision system, to assess the image quality of non-mydriatic photography, especially for low-cost ophthalmological telemedicine applications.

  12. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    PubMed

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine whether a given individual has already appeared over the camera network. Individual recognition often uses faces as a trait and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitations of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still leave two questions open: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for the ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art systems.
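
    The "tailored 0-1 knapsack" used to pick base classifiers can be illustrated with the standard dynamic-programming solution. Treating each classifier's validation accuracy as its value and a discretised redundancy penalty as its weight is only a schematic reading, not the paper's exact formulation; all numbers below are placeholders.

```python
def knapsack_select(values, weights, capacity):
    """Standard 0-1 knapsack DP; returns (best_total_value, selected_indices).

    values   : per-item value (e.g. validation accuracy of a base classifier)
    weights  : per-item integer cost (e.g. a discretised redundancy penalty)
    capacity : integer budget limiting the total cost of the ensemble
    """
    n = len(values)
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if w <= c and dp[i - 1][c - w] + v > dp[i][c]:
                dp[i][c] = dp[i - 1][c - w] + v
    chosen, c = [], capacity            # backtrack to recover the selected items
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= weights[i - 1]
    return dp[n][capacity], sorted(chosen)

# Hypothetical pool of six base classifiers.
accuracy = [0.81, 0.78, 0.83, 0.75, 0.80, 0.79]
cost = [3, 2, 4, 1, 3, 2]
print(knapsack_select(accuracy, cost, capacity=7))
```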

  13. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations.

    PubMed

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-09-23

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration that uses non-common features and avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method.
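
    The depth-dependent homography lookup table can be sketched generically: homographies calibrated at a few known depths are stored, and the entry whose calibration depth is closest to the measured ToF depth maps ToF pixels into the RGB image. The matrices and depths below are placeholders, not calibration results from the paper.

```python
import numpy as np

def apply_homography(H, pts):
    """Map (n, 2) pixel coordinates through a 3x3 homography."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def homography_from_lut(depth_m, lut):
    """Pick the homography whose calibration depth is closest to `depth_m`.

    lut : list of (calibration_depth_m, 3x3 homography) pairs
    """
    return min(lut, key=lambda entry: abs(entry[0] - depth_m))[1]

# Placeholder lookup table: identity-like mappings with a depth-dependent shift.
lut = [(d, np.array([[1.0, 0.0, 5.0 * d],
                     [0.0, 1.0, 2.0 * d],
                     [0.0, 0.0, 1.0]])) for d in (0.5, 1.0, 1.5, 2.0)]

tof_pixels = np.array([[10.0, 20.0], [100.0, 80.0]])
H = homography_from_lut(1.2, lut)            # ToF reports a depth of ~1.2 m
print(apply_homography(H, tof_pixels))       # corresponding RGB image coordinates
```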

  14. A New Approach for Combining Time-of-Flight and RGB Cameras Based on Depth-Dependent Planar Projective Transformations

    PubMed Central

    Salinas, Carlota; Fernández, Roemi; Montes, Héctor; Armada, Manuel

    2015-01-01

    Image registration for sensor fusion is a valuable technique to acquire 3D and colour information for a scene. Nevertheless, this process normally relies on feature-matching techniques, which is a drawback for combining sensors that are not able to deliver common features. The combination of ToF and RGB cameras is an instance of that problem. Typically, the fusion of these sensors is based on the extrinsic parameter computation of the coordinate transformation between the two cameras. This leads to a loss of colour information because of the low resolution of the ToF camera, and sophisticated algorithms are required to minimize this issue. This work proposes a method for sensor registration that uses non-common features and avoids the loss of colour information. The depth information is used as a virtual feature for estimating a depth-dependent homography lookup table (Hlut). The homographies are computed within sets of ground control points from 104 images. Since the distances from the control points to the ToF camera are known, the working distance of each element in the Hlut is estimated. Finally, two series of experimental tests have been carried out in order to validate the capabilities of the proposed method. PMID:26404315

  15. The Complementary Pinhole Camera.

    ERIC Educational Resources Information Center

    Bissonnette, D.; And Others

    1991-01-01

    Presents an experiment based on the principles of rectilinear motion of light operating in a pinhole camera that projects the image of an illuminated object through a small hole in a sheet to an image screen. (MDH)

  16. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  17. Relativistic Positron Creation Using Ultraintense Short Pulse Lasers

    SciTech Connect

    Chen Hui; Wilks, Scott C.; Bonlie, James D.; Price, Dwight F.; Beiersdorfer, Peter; Liang, Edison P.; Myatt, Jason; Meyerhofer, David D.

    2009-03-13

    We measure up to 2x10{sup 10} positrons per steradian ejected out the back of {approx}mm thick gold targets when illuminated with short ({approx}1 ps) ultraintense ({approx}1x10{sup 20} W/cm{sup 2}) laser pulses. Positrons are produced predominantly by the Bethe-Heitler process and have an effective temperature of 2-4 MeV, with the distribution peaking at 4-7 MeV. The angular distribution of the positrons is anisotropic. Modeling based on the measurements indicates the positron density to be {approx}10{sup 16} positrons/cm{sup 3}, the highest ever created in the laboratory.

  18. PTZ Camera-Based Displacement Sensor System with Perspective Distortion Correction Unit for Early Detection of Building Destruction

    PubMed Central

    Jeong, Yoosoo; Park, Daejin; Park, Kil Houm

    2017-01-01

    This paper presents a pan-tilt-zoom (PTZ) camera-based displacement measurement system, specifically based on a perspective distortion correction technique, for the early detection of building destruction. The proposed PTZ-based vision system rotates the camera to monitor specific targets from various distances and controls the zoom level of the lens for a constant field of view (FOV). The proposed approach adopts perspective distortion correction to expand the measurable range when monitoring the displacement of the target structure. The implemented system successfully obtains displacement information for structures that are not easily accessible on the remote site. We manually measured the displacement acquired from markers attached to sample structures covering a wide geographic region. Our approach using a PTZ-based camera reduces the perspective distortion, so that the improved system can overcome the limitations of previous works related to displacement measurement. Evaluation results show that a PTZ-based displacement sensor system with the proposed distortion correction unit is potentially a cost-effective and easy-to-install solution for commercialization. PMID:28241464

  19. An upgraded camera-based imaging system for mapping venous blood oxygenation in human skin tissue

    NASA Astrophysics Data System (ADS)

    Li, Jun; Zhang, Xiao; Qiu, Lina; Leotta, Daniel F.

    2016-07-01

    A camera-based imaging system was previously developed for mapping venous blood oxygenation in human skin. However, several limitations were realized in later applications, which could lead to either significant bias in the estimated oxygen saturation value or poor spatial resolution in the map of the oxygen saturation. To overcome these issues, an upgraded system was developed using improved modeling and image processing algorithms. In the modeling, Monte Carlo (MC) simulation was used to verify the effectiveness of the ratio-to-ratio method for semi-infinite and two-layer skin models, and then the relationship between the venous oxygen saturation and the ratio-to-ratio was determined. The improved image processing algorithms included surface curvature correction and motion compensation. The curvature correction is necessary when the imaged skin surface is uneven. The motion compensation is critical for the imaging system because surface motion is inevitable when the venous volume alteration is induced by cuff inflation. In addition to the modeling and image processing algorithms in the upgraded system, a ring light guide was used to achieve perpendicular and uniform incidence of light. Cross-polarization detection was also adopted to suppress surface specular reflection. The upgraded system was applied to mapping of venous oxygen saturation in the palm, opisthenar and forearm of human subjects. The spatial resolution of the oxygenation map achieved is much better than that of the original system. In addition, the mean values of the venous oxygen saturation for the three locations were verified with a commercial near-infrared spectroscopy system and were consistent with previously published data.

  20. Field-programmable gate array-based hardware architecture for high-speed camera with KAI-0340 CCD image sensor

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Yan, Su; Zhou, Zuofeng; Cao, Jianzhong; Yan, Aqi; Tang, Linao; Lei, Yangjie

    2013-08-01

    We present a field-programmable gate array (FPGA)-based hardware architecture for a high-speed camera with fast auto-exposure control and colour filter array (CFA) demosaicing. The proposed hardware architecture includes the design of the charge-coupled device (CCD) drive circuits, the image processing circuits, and the power supply circuits. The CCD drive circuits convert the TTL (Transistor-Transistor Logic) level timing sequences produced by the image processing circuits into the timing sequences under which the CCD image sensor can output analog image signals. The image processing circuits convert the analog signals into digital signals for subsequent processing; the TTL timing, auto-exposure control, CFA demosaicing, and gamma correction are accomplished in this module. The power supply circuits provide power for the whole system, which is very important for image quality. Power supply noise affects image quality directly, and we reduce it by hardware means, which is very effective. In this system the CCD is a KAI-0340, which can output 210 full-resolution frames per second, and our camera works well in this mode. Traditional auto-exposure control algorithms reach a proper exposure level so slowly that it was necessary to develop a fast auto-exposure control method, and we present a new auto-exposure algorithm suited to high-speed cameras. Colour demosaicing is critical for digital cameras, because it converts the Bayer sensor mosaic output into a full colour image, which determines the output image quality of the camera. Complex algorithms can achieve high quality but cannot be implemented in hardware. A low-complexity demosaicing method is presented that can be implemented in hardware while satisfying the quality requirements. Experimental results are given at the end of the paper.
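
    A low-complexity, hardware-friendly demosaicing of the kind the abstract refers to is bilinear interpolation over the Bayer pattern. The software sketch below (an RGGB pattern is assumed) shows the idea only; it is not the FPGA implementation from the paper.

```python
import numpy as np

def demosaic_bilinear_rggb(raw):
    """Bilinear demosaicing of an RGGB Bayer mosaic: (h, w) -> (h, w, 3) RGB."""
    h, w = raw.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def interp(mask):
        # Average the available samples of one colour plane over a 3x3 window,
        # then keep the directly measured samples untouched.
        vals = np.where(mask, raw, 0.0)
        cnt = mask.astype(float)
        pv, pc = np.pad(vals, 1), np.pad(cnt, 1)
        acc_v, acc_c = np.zeros_like(vals), np.zeros_like(cnt)
        for dy in range(3):
            for dx in range(3):
                acc_v += pv[dy:dy + h, dx:dx + w]
                acc_c += pc[dy:dy + h, dx:dx + w]
        out = acc_v / np.maximum(acc_c, 1.0)
        out[mask] = raw[mask]
        return out

    return np.stack([interp(r_mask), interp(g_mask), interp(b_mask)], axis=-1)

# Sanity check: a uniform grey mosaic should demosaic to equal R, G and B values.
raw = np.full((8, 8), 128.0)
print(demosaic_bilinear_rggb(raw)[3, 3])     # -> [128. 128. 128.]
```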

  1. Wavelet power spectrum-based autofocusing algorithm for time delayed and integration charge coupled device space camera.

    PubMed

    Tao, Shuping; Jin, Guang; Zhang, Xuyan; Qu, Hongsong; An, Yuan

    2012-07-20

    A novel autofocusing algorithm using the directional wavelet power spectrum is proposed for time delayed and integration charge coupled device (TDI CCD) space cameras, which overcomes the difficulty of computing a focus measure while the imaged scene changes in real time. By using the multiresolution and band-pass characteristics of the wavelet transform to improve on the power spectrum based on the fast Fourier transform (FFT), the wavelet power spectrum is made less sensitive to the variation of scenes. Moreover, the new focus measure can effectively eliminate the impact of image motion mismatch through its directional selection. We test the proposed method's performance on synthetic images as well as in a real ground experiment for one TDI CCD prototype camera, and compare it with a focus measure based on the existing FFT spectrum. The simulation results show that the new focus measure can effectively express the defocused states of real remote sensing images. The error ratio is only 0.112, while that of the prevalent algorithm based on the FFT spectrum is as high as 0.4. Compared with the FFT-based method, the proposed algorithm performs with high reliability in the real imaging experiments, where it reduces the instability from 0.600 to 0.161. The two experimental results demonstrate that the proposed algorithm has the characteristics of good monotonicity, high sensitivity, and accuracy. The new algorithm can satisfy the autofocusing requirements of TDI CCD space cameras.
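
    A focus measure built from directional wavelet detail energy can be sketched with a single-level Haar decomposition. The choice of Haar, the single decomposition level, and the use of only one detail orientation are simplifications assumed here; they are not the paper's exact method.

```python
import numpy as np

def haar_level1(img):
    """One-level 2-D Haar decomposition; returns the LL, LH, HL, HH sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 4.0,    # LL: coarse approximation
            (a + b - c - d) / 4.0,    # LH: detail along rows
            (a - b + c - d) / 4.0,    # HL: detail along columns
            (a - b - c + d) / 4.0)    # HH: diagonal detail

def directional_focus_measure(img, band="hl"):
    """Mean energy of a single detail orientation, so that smear along the other
    image direction (e.g. residual image motion) contributes less."""
    ll, lh, hl, hh = haar_level1(img.astype(float))
    return float(np.mean({"lh": lh, "hl": hl, "hh": hh}[band] ** 2))

# Sharp versus horizontally smeared version of a synthetic stripe target.
x = np.linspace(0.0, 2 * np.pi * 32, 256)             # 32 cycles across the image
sharp = np.tile(np.sin(x), (256, 1))                   # vertical stripes
csum = np.cumsum(sharp, axis=1)
blurred = (csum[:, 8:] - csum[:, :-8]) / 8.0           # 8-pixel horizontal smear
print(directional_focus_measure(sharp), directional_focus_measure(blurred))
```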

  2. Target Volume Delineation in Dynamic Positron Emission Tomography Based on Time Activity Curve Differences

    NASA Astrophysics Data System (ADS)

    Teymurazyan, Artur

    Tumor volume delineation plays a critical role in radiation treatment planning and simulation, since inaccurately defined treatment volumes may lead to the overdosing of normal surrounding structures and potentially missing the cancerous tissue. However, the imaging modality almost exclusively used to determine tumor volumes, X-ray Computed Tomography (CT), does not readily exhibit a distinction between cancerous and normal tissue. It has been shown that CT data augmented with PET can improve radiation treatment plans by providing functional information not available otherwise. Presently, static PET scans account for the majority of procedures performed in clinical practice. In the radiation therapy (RT) setting, these scans are visually inspected by a radiation oncologist for the purpose of tumor volume delineation. This approach, however, often results in significant interobserver variability when comparing contours drawn by different experts on the same PET/CT data sets. For this reason, a search for more objective contouring approaches is underway. The major drawback of conventional tumor delineation in static PET images is the fact that two neighboring voxels of the same intensity can exhibit markedly different overall dynamics. Therefore, equal intensity voxels in a static analysis of a PET image may be falsely classified as belonging to the same tissue. Dynamic PET allows the evaluation of image data in the temporal domain, which often describes specific biochemical properties of the imaged tissues. Analysis of dynamic PET data can be used to improve classification of the imaged volume into cancerous and normal tissue. In this thesis we present a novel tumor volume delineation approach (Single Seed Region Growing algorithm in 4D (dynamic) PET or SSRG/4D-PET) in dynamic PET based on TAC (Time Activity Curve) differences. A partially-supervised approach is pursued in order to allow an expert reader to utilize the information available from other imaging

  3. Positrons in surface physics

    NASA Astrophysics Data System (ADS)

    Hugenschmidt, Christoph

    2016-12-01

    Within the last decade powerful methods have been developed to study surfaces using bright low-energy positron beams. These novel analysis tools exploit the unique properties of positron interaction with surfaces, which comprise the absence of exchange interaction, repulsive crystal potential and positron trapping in delocalized surface states at low energies. By applying reflection high-energy positron diffraction (RHEPD) one can benefit from the phenomenon of total reflection below a critical angle that is not present in electron surface diffraction. Therefore, RHEPD allows the determination of the atom positions of (reconstructed) surfaces with outstanding accuracy. The main advantages of positron annihilation induced Auger-electron spectroscopy (PAES) are the missing secondary electron background in the energy region of Auger-transitions and its topmost layer sensitivity for elemental analysis. In order to enable the investigation of the electron polarization at surfaces low-energy spin-polarized positrons are used to probe the outermost electrons of the surface. Furthermore, in fundamental research the preparation of well defined surfaces tailored for the production of bound leptonic systems plays an outstanding role. In this report, it is envisaged to cover both the fundamental aspects of positron surface interaction and the present status of surface studies using modern positron beam techniques.

  4. Research on radiometric calibration of interline transfer CCD camera based on TDI working mode

    NASA Astrophysics Data System (ADS)

    Wu, Xing-xing; Liu, Jin-guo

    2010-10-01

    An interline transfer CCD camera can be designed to work in a time delay and integration (TDI) mode, similar to a TDI CCD, to obtain higher responsivity and spatial resolution under poor illumination conditions. However, in laboratory radiometric calibration experiments it was found that the outputs of some pixels were much lower than those of others when the interline transfer CCD camera worked in TDI mode. As a result, the photo response non-uniformity (PRNU) and signal-to-noise ratio (SNR) of the system deteriorated. The mechanism of this phenomenon was analyzed, and improved PRNU and SNR algorithms for the interline transfer CCD camera were proposed to solve the problem. The number of TDI stages was used as a variable in the PRNU and SNR algorithms, and system performance was improved noticeably with little impact on use. In validation experiments the improved algorithms were applied in the radiometric calibration of a camera with a KAI-0340 as the detector. The results of the validation experiments proved that the improved algorithms could effectively improve the SNR and lower the PRNU of the system, while the characteristics of the system were better reflected. When working with 16 TDI stages, the PRNU was reduced from 2.25% to 0.82% and the SNR was improved by about 2%.
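
    PRNU and SNR figures of this kind are computed from flat-field frames. The sketch below uses one common pair of definitions (peak-to-peak deviation over twice the mean for PRNU, and temporal mean over temporal standard deviation for SNR) on synthetic data; the paper's stage-dependent correction itself is not reproduced, and the TDI stage count appears only as a signal scale factor.

```python
import numpy as np

def prnu_percent(flat_mean):
    """PRNU from a temporally averaged flat field: (max - min) / (2 * mean), in %."""
    return (flat_mean.max() - flat_mean.min()) / (2.0 * flat_mean.mean()) * 100.0

def snr_db(flat_stack):
    """Mean per-pixel temporal SNR of a stack of flat-field frames, in dB."""
    mean = flat_stack.mean(axis=0)
    std = flat_stack.std(axis=0)
    return float(20.0 * np.log10((mean / np.maximum(std, 1e-12)).mean()))

# Synthetic flat-field stack for a sensor running with a given number of TDI stages.
rng = np.random.default_rng(3)
tdi_stages = 16
gain_map = 1.0 + 0.01 * rng.normal(size=(100, 100))     # ~1% pixel gain spread
frames = rng.poisson(200.0 * tdi_stages * gain_map, size=(50, 100, 100)).astype(float)
print(f"TDI={tdi_stages}: PRNU = {prnu_percent(frames.mean(axis=0)):.2f}%, "
      f"SNR = {snr_db(frames):.1f} dB")
```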

  5. Positrons for linear colliders

    SciTech Connect

    Ecklund, S.

    1987-11-01

    The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)

  6. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  7. Comparison of Target- and Mutual Information Based Calibration of Terrestrial Laser Scanner and Digital Camera for Deformation Monitoring

    NASA Astrophysics Data System (ADS)

    Omidalizarandi, M.; Neumann, I.

    2015-12-01

    In the current state of the art, geodetic deformation analysis of natural and artificial objects (e.g. dams, bridges, ...) is an ongoing research topic in both static and kinematic modes and has received considerable interest from researchers and geodetic engineers. In this work, in order to increase the accuracy of geodetic deformation analysis, a terrestrial laser scanner (TLS; here the Zoller+Fröhlich IMAGER 5006) and a high resolution digital camera (Nikon D750) are integrated to complementarily benefit from each other. In order to optimally combine the acquired data of the hybrid sensor system, a highly accurate estimation of the extrinsic calibration parameters between the TLS and the digital camera is a vital preliminary step. Thus, the calibration of the aforementioned hybrid sensor system can be separated into three single calibrations: calibration of the camera, calibration of the TLS, and extrinsic calibration between the TLS and the digital camera. In this research we focus on the highly accurate estimation of the extrinsic parameters between the fused sensors, and both target-based and targetless (mutual information based) methods are applied. In the target-based calibration, different types of observations (image coordinates, TLS measurements and laser tracker measurements for validation) are utilized, and variance component estimation is applied to optimally assign adequate weights to the observations. Space resection bundle adjustment based on the collinearity equations is solved using the Gauss-Markov and Gauss-Helmert models. Statistical tests are performed to discard outliers and large residuals in the adjustment procedure. In the end, the two aforementioned approaches are compared, their advantages and disadvantages are investigated, and numerical results are presented and discussed.

  8. Compact pnCCD-based X-ray camera with high spatial and energy resolution: a color X-ray camera.

    PubMed

    Scharf, O; Ihle, S; Ordavo, I; Arkadiev, V; Bjeoumikhov, A; Bjeoumikhova, S; Buzanich, G; Gubzhokov, R; Günther, A; Hartmann, R; Kühbacher, M; Lang, M; Langhoff, N; Liebel, A; Radtke, M; Reinholz, U; Riesemeier, H; Soltau, H; Strüder, L; Thünemann, A F; Wedell, R

    2011-04-01

    For many applications there is a requirement for nondestructive analytical investigation of the elemental distribution in a sample. With the improvement of X-ray optics and spectroscopic X-ray imagers, full field X-ray fluorescence (FF-XRF) methods are feasible. A new device for high-resolution X-ray imaging, an energy and spatial resolving X-ray camera, is presented. The basic idea behind this so-called "color X-ray camera" (CXC) is to combine an energy dispersive array detector for X-rays, in this case a pnCCD, with polycapillary optics. Imaging is achieved using multiframe recording of the energy and the point of impact of single photons. The camera was tested using a laboratory 30 μm microfocus X-ray tube and synchrotron radiation from BESSY II at the BAMline facility. These experiments demonstrate the suitability of the camera for X-ray fluorescence analytics. The camera simultaneously records 69,696 spectra with an energy resolution of 152 eV for manganese K(α) with a spatial resolution of 50 μm over an imaging area of 12.7 × 12.7 mm(2). It is sensitive to photons in the energy region between 3 and 40 keV, limited by a 50 μm beryllium window, and the sensitive thickness of 450 μm of the chip. Online preview of the sample is possible as the software updates the sums of the counts for certain energy channel ranges during the measurement and displays 2-D false-color maps as well as spectra of selected regions. The complete data cube of 264 × 264 spectra is saved for further qualitative and quantitative processing.

  9. Photometric-based recovery of illuminant-free color images using a red-green-blue digital camera

    NASA Astrophysics Data System (ADS)

    Luis Nieves, Juan; Plata, Clara; Valero, Eva M.; Romero, Javier

    2012-01-01

    Albedo estimation has traditionally been used to make computational simulations of real objects under different conditions, but as yet no device is capable of measuring albedo directly. The aim of this work is to introduce a photometric-based color imaging framework that can estimate albedo and can reproduce the appearance of images, both indoors and outdoors, under different lights and illumination geometries. Using a calibration sample set composed of chips made of the same material but with different colors and textures, we compare two photometric-stereo techniques, one of them avoiding the effect of shadows and highlights in the image and the other ignoring this constraint. We combined a photometric-stereo technique and a color-estimation algorithm that directly relates the camera sensor outputs to the albedo values. The proposed method can produce illuminant-free images with good color accuracy when a three-channel red-green-blue (RGB) digital camera is used, even outdoors under solar illumination.
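
    Classical Lambertian photometric stereo, which underlies albedo estimation of this kind, solves a small least-squares problem per pixel given at least three known light directions. The sketch below is the textbook formulation; it does not include the shadow/highlight rejection variant compared in the paper, and the light directions and albedo are synthetic.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Per-pixel albedo and surface normals under the Lambertian model.

    images     : (k, h, w) grey-level images under k distant light sources
    light_dirs : (k, 3) unit light-direction vectors
    Solves I = L @ (albedo * n) per pixel in the least-squares sense.
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                                  # (k, h*w)
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)         # (3, h*w)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-12)).T.reshape(h, w, 3)
    return albedo.reshape(h, w), normals

# Synthetic check: a flat surface of albedo 0.6 lit from three known directions.
L = np.array([[0.0, 0.0, 1.0],
              [0.6, 0.0, 0.8],
              [0.0, 0.6, 0.8]])
n_true = np.array([0.0, 0.0, 1.0])
imgs = np.stack([np.full((4, 4), 0.6 * L[i] @ n_true) for i in range(3)])
alb, nrm = photometric_stereo(imgs, L)
print(alb[0, 0], nrm[0, 0])      # ~0.6 and ~[0, 0, 1]
```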

  10. Feasibility study of a ``4H'' X-ray camera based on GaAs:Cr sensor

    NASA Astrophysics Data System (ADS)

    Dragone, A.; Kenney, C.; Lozinskaya, A.; Tolbanov, O.; Tyazhev, A.; Zarubin, A.; Wang, Zhehui

    2016-11-01

    A multilayer stacked X-ray camera concept is described. This type of technology is called a '4H' X-ray camera, where 4H stands for high-Z (Z>30) sensor, high resolution (less than 300 micron pixel pitch), high speed (above 100 MHz), and high energy (above 30 keV in photon energy). The components of the technology, similar to the popular two-dimensional (2D) hybrid pixelated array detectors, consist of GaAs:Cr sensors bonded to high-speed ASICs. 4H cameras based on GaAs also use the integration mode of X-ray detection. The number of layers, on the order of ten, is smaller than in an earlier configuration for the single-photon-counting (SPC) mode of detection [1]. A high-speed ASIC based on a modification of the ePix family of ASICs is discussed. Applications in X-ray free electron lasers (XFELs), synchrotrons, medicine and non-destructive testing are possible.

  11. Use of a smart phone based thermo camera for skin prick allergy testing: a feasibility study (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert

    2016-02-01

    Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for e.g. dark skin types. A small smart phone based thermo camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermo camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart phone based camera.

  12. Focal thyroid incidentaloma on whole body fluorodeoxyglucose positron emission tomography/computed tomography in known cancer patients: A case-based discussion with a series of three examples.

    PubMed

    Targe, Mangala; Basu, Sandip

    2015-01-01

    The importance, imaging characteristics and outcome of focal thyroid incidentaloma on fluorodeoxyglucose-positron emission tomography/computed tomography (FDG-PET/CT) are illustrated in this report. The report is drawn from a series of three case examples of proven malignancy at different locations, with three different thyroid cytopathological diagnoses. Subsequently, a case-based discussion of the present consensus on the management of this entity is undertaken, including certain specific aspects of PET-CT interpretation and its role in this setting.

  13. Performance of the Tachyon Time-of-Flight PET Camera.

    PubMed

    Peng, Q; Choong, W-S; Vu, C; Huber, J S; Janecek, M; Wilson, D; Huesman, R H; Qi, Jinyi; Zhou, Jian; Moses, W W

    2015-02-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm² side of 6.15 × 6.15 × 25 mm³ LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.
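
    For orientation, a commonly used rule of thumb (not taken from this paper) relates the TOF gain to the ratio of the object size D to the localisation length Δx = cΔt/2: the variance improves roughly by D/Δx, so the standard deviation improves by about sqrt(D/Δx). The back-of-the-envelope numbers below, with an assumed 30 cm phantom, land in the same range as the reported factor of about 2.3.

```python
# Back-of-the-envelope TOF gain estimate (rule of thumb, not from the paper).
c = 3.0e8          # speed of light [m/s]
dt = 314e-12       # coincidence timing resolution [s]
D = 0.30           # assumed effective object/phantom diameter [m]

dx = c * dt / 2.0                  # TOF localisation length along the line of response
variance_gain = D / dx             # approximate variance-reduction factor
std_gain = variance_gain ** 0.5    # corresponding reduction in standard deviation
print(f"dx = {dx * 100:.1f} cm, variance gain ~ {variance_gain:.1f}, "
      f"std-dev gain ~ {std_gain:.1f}")   # ~2.5, same order as the reported ~2.3
```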

  14. Performance of the Tachyon Time-of-Flight PET Camera

    PubMed Central

    Peng, Q.; Choong, W.-S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-01

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon’s detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ± 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3. PMID:26594057

  15. Performance of the Tachyon Time-of-Flight PET Camera

    DOE PAGES

    Peng, Q.; Choong, W. -S.; Vu, C.; ...

    2015-01-23

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  16. Performance of the Tachyon Time-of-Flight PET Camera

    SciTech Connect

    Peng, Q.; Choong, W. -S.; Vu, C.; Huber, J. S.; Janecek, M.; Wilson, D.; Huesman, R. H.; Qi, Jinyi; Zhou, Jian; Moses, W. W.

    2015-01-23

    We have constructed and characterized a time-of-flight Positron Emission Tomography (TOF PET) camera called the Tachyon. The Tachyon is a single-ring Lutetium Oxyorthosilicate (LSO) based camera designed to obtain significantly better timing resolution than the ~ 550 ps found in present commercial TOF cameras, in order to quantify the benefit of improved TOF resolution for clinically relevant tasks. The Tachyon's detector module is optimized for timing by coupling the 6.15 × 25 mm2 side of 6.15 × 6.15 × 25 mm3 LSO scintillator crystals onto a 1-inch diameter Hamamatsu R-9800 PMT with a super-bialkali photocathode. We characterized the camera according to the NEMA NU 2-2012 standard, measuring the energy resolution, timing resolution, spatial resolution, noise equivalent count rates and sensitivity. The Tachyon achieved a coincidence timing resolution of 314 ps +/- 20 ps FWHM over all crystal-crystal combinations. Experiments were performed with the NEMA body phantom to assess the imaging performance improvement over non-TOF PET. The results show that at a matched contrast, incorporating 314 ps TOF reduces the standard deviation of the contrast by a factor of about 2.3.

  17. Infrared camera based thermometry for quality assurance of superficial hyperthermia applicators

    NASA Astrophysics Data System (ADS)

    Müller, Johannes; Hartmann, Josefin; Bert, Christoph

    2016-04-01

    The purpose of this work was to provide a feasible and easy to apply phantom-based quality assurance (QA) procedure for superficial hyperthermia (SHT) applicators by means of infrared (IR) thermography. The VarioCAM hr head (InfraTec, Dresden, Germany) was used to investigate the SA-812, the SA-510 and the SA-308 applicators (all: Pyrexar Medical, Salt Lake City, UT, USA). Probe referencing and thermal equilibrium procedures were applied to determine the emissivity of the muscle-equivalent agar phantom. Firstly, the disturbing potential of thermal conduction on the temperature distribution inside the phantom was analyzed through measurements after various heating times (5-50 min). Next, the influence of the temperature of the water bolus between the SA-812 applicator and the phantom's surface was evaluated by varying its temperature. The results are presented in terms of characteristic values (extremal temperatures, percentiles and effective field sizes (EFS)) and temperature-area-histograms (TAH). Lastly, spiral antenna applicators were compared by the introduced characteristics. The emissivity of the used phantom was found to be ɛ = 0.91 ± 0.03, and the results of both methods coincided. The influence of thermal conduction with regard to heating time was smaller than expected; the EFS of the SA-812 applicator had a size of (68.6 ± 6.7) cm², with averaged group variances of ±3.0 cm². The TAHs show that the influence of the water bolus is mostly limited to depths of <3 cm, yet it can greatly enhance or reduce heat generation in this regime: at a depth of 1 cm, measured maximal temperature rises were 14.5 °C for T_Bolus = 30 °C and 8.6 °C for T_Bolus = 21 °C, respectively. The EFS was increased, too. The three spiral antenna applicators generated similar heat distributions. Generally, the procedure proved to yield informative insights into applicator characteristics, thus making the application

  18. Design of belief propagation based on FPGA for the multistereo CAFADIS camera.

    PubMed

    Magdaleno, Eduardo; Lüke, Jonás Philipp; Rodríguez, Manuel; Rodríguez-Ramos, José Manuel

    2010-01-01

    In this paper we describe a fast, specialized hardware implementation of the belief propagation algorithm for the CAFADIS camera, a new plenoptic sensor patented by the University of La Laguna. This camera captures the lightfield of the scene and can be used to find out at which depth each pixel is in focus. The algorithm has been designed for FPGA devices using VHDL. We propose a parallel and pipelined architecture to implement the algorithm without external memory. Although the BRAM resources of the device increase considerably, we can maintain real-time constraints by using the extremely high-performance signal processing capability of the device through parallelism and by accessing several memories simultaneously. Quantitative results with 16-bit precision show that performance is very close to that of the original Matlab implementation of the algorithm.

  19. Thresholding schemes for visible light communications with CMOS camera using entropy-based algorithms.

    PubMed

    Liang, Kevin; Chow, Chi-Wai; Liu, Yang; Yeh, Chien-Hung

    2016-10-31

    Recent visible light communication (VLC) studies have mainly used positive-intrinsic-negative (PIN) photodiodes and avalanche photodiodes (APDs). VLC using an embedded complementary metal-oxide-semiconductor (CMOS) camera is attractive. Using the rolling shutter effect of the CMOS camera can increase the VLC data rate, and different techniques have been proposed for improving the demodulation of the rolling shutter pattern. Important steps in demodulating the rolling shutter pattern are smoothing and the application of efficient thresholding to distinguish the data logic levels. Here, we propose and demonstrate for the first time two entropy thresholding algorithms, maximum entropy thresholding and minimum cross entropy thresholding. An experimental evaluation comparing their bit-error-rate (BER) performance and efficiency is also performed.
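
    As an illustration of one of the two schemes named above, the sketch below implements maximum entropy thresholding in the style of Kapur's method on a synthetic rolling-shutter-like intensity column; it is not the authors' implementation, and the test signal is invented.

```python
# Hedged sketch of maximum entropy thresholding (Kapur-style); illustrative only.
import numpy as np

def max_entropy_threshold(values, nbins=256):
    """Threshold maximising the sum of foreground and background histogram entropies."""
    hist, _ = np.histogram(values, bins=nbins, range=(0, nbins))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, nbins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))   # background entropy
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))   # foreground entropy
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t

# Illustrative use on a synthetic bright/dark stripe column (rolling-shutter-like).
rng = np.random.default_rng(2)
column = np.where(np.arange(512) % 32 < 16, 60, 190) + rng.normal(0, 8, 512)
column = np.clip(column, 0, 255)
t = max_entropy_threshold(column)
bits = (column > t).astype(int)     # thresholded logic levels along the column
print("threshold:", t, " first bits:", bits[:8])
```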

  20. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  1. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  2. On-orbit calibration approach for star cameras based on the iteration method with variable weights.

    PubMed

    Wang, Mi; Cheng, Yufeng; Yang, Bo; Chen, Xiao

    2015-07-20

    To perform efficient on-orbit calibration for star cameras, we developed an attitude-independent calibration approach for global optimization and noise removal by least-squares estimation using multiple star images, with which the optimal principal point, focal length, and high-order focal plane distortion can be obtained in one step in full consideration of the interaction among star camera parameters. To address the problem of stars being misidentified in star images, an iteration method with variable weights is introduced to eliminate the influence of misidentified star pairs. The approach increases the precision of the least-squares estimation and uses fewer star images. The proposed approach has been verified to be precise and robust in three experiments.
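
    The variable-weight iteration can be pictured as an iteratively reweighted least-squares loop in which observations with large residuals, such as misidentified star pairs, receive progressively smaller weights. The sketch below shows this idea on a toy linear fit; the weighting function and all data are illustrative assumptions, not the authors' algorithm.

```python
# Hedged sketch of an iteration with variable weights (IRLS-style); toy data,
# not the authors' calibration algorithm.
import numpy as np

def irls(A, y, n_iter=10, k=2.0):
    """Weighted least squares, recomputing weights from residuals each pass."""
    w = np.ones(len(y))
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r = y - A @ x
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12   # robust scale (MAD)
        w = np.where(np.abs(r) <= k * s, 1.0, k * s / (np.abs(r) + 1e-12))
    return x, w

# Illustrative use: a line fit with two grossly wrong ("misidentified") points.
rng = np.random.default_rng(3)
x_obs = np.linspace(0, 1, 30)
y_obs = 2.0 * x_obs + 0.5 + rng.normal(0, 0.01, 30)
y_obs[[5, 20]] += 3.0                                # simulated misidentifications
A = np.column_stack([x_obs, np.ones_like(x_obs)])
params, weights = irls(A, y_obs)
print("slope/intercept:", np.round(params, 3), " down-weighted:", np.where(weights < 0.5)[0])
```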

  3. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates

    PubMed Central

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a ‘control’ and ‘treatment’ survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28–9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40–3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted. PMID:27050816

  4. Scent Lure Effect on Camera-Trap Based Leopard Density Estimates.

    PubMed

    Braczkowski, Alexander Richard; Balme, Guy Andrew; Dickman, Amy; Fattebert, Julien; Johnson, Paul; Dickerson, Tristan; Macdonald, David Whyte; Hunter, Luke

    2016-01-01

    Density estimates for large carnivores derived from camera surveys often have wide confidence intervals due to low detection rates. Such estimates are of limited value to authorities, which require precise population estimates to inform conservation strategies. Using lures can potentially increase detection, improving the precision of estimates. However, by altering the spatio-temporal patterning of individuals across the camera array, lures may violate closure, a fundamental assumption of capture-recapture. Here, we test the effect of scent lures on the precision and veracity of density estimates derived from camera-trap surveys of a protected African leopard population. We undertook two surveys (a 'control' and 'treatment' survey) on Phinda Game Reserve, South Africa. Survey design remained consistent except a scent lure was applied at camera-trap stations during the treatment survey. Lures did not affect the maximum movement distances (p = 0.96) or temporal activity of female (p = 0.12) or male leopards (p = 0.79), and the assumption of geographic closure was met for both surveys (p >0.05). The numbers of photographic captures were also similar for control and treatment surveys (p = 0.90). Accordingly, density estimates were comparable between surveys, although estimates derived using non-spatial methods (7.28-9.28 leopards/100km2) were considerably higher than estimates from spatially-explicit methods (3.40-3.65 leopards/100km2). The precision of estimates from the control and treatment surveys was also comparable, and this applied to both non-spatial and spatial methods of estimation. Our findings suggest that at least in the context of leopard research in productive habitats, the use of lures is not warranted.

  5. MOEMS-based time-of-flight camera for 3D video capturing

    NASA Astrophysics Data System (ADS)

    You, Jang-Woo; Park, Yong-Hwa; Cho, Yong-Chul; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Lee, Seung-Wan

    2013-03-01

    We suggest a Time-of-Flight (TOF) video camera capturing real-time depth images (a.k.a. depth maps), which are generated from fast-modulated IR images utilizing a novel MOEMS modulator with a switching speed of 20 MHz. In general, 3 or 4 independent IR (e.g. 850 nm) images are required to generate a single frame of depth image. The captured video image of a moving object frequently shows motion drag between sequentially captured IR images, which results in the so-called 'motion blur' problem even when the frame rate of the depth image is fast (e.g. 30 to 60 Hz). We propose a novel 'single shot' TOF 3D camera architecture generating a single depth image out of synchronized captured IR images. The imaging system consists of a 2×2 imaging lens array, MOEMS optical shutters (modulators) placed on each lens aperture, and a standard CMOS image sensor. The IR light reflected from the object is modulated by the optical shutters on the apertures of the 2×2 lens array and the transmitted images are then captured on the image sensor, resulting in 2×2 sub-IR images. As a result, the depth image is generated from the four simultaneously captured independent sub-IR images, hence the motion blur problem is eliminated. The resulting performance is very useful in applications of the 3D camera as a human-machine interaction device, such as user interfaces for TVs, monitors, or handheld devices, and motion capture of the human body. In addition, we show that the presented 3D camera can be modified to capture color together with the depth image simultaneously at the 'single shot' frame rate.
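
    For context, the sketch below shows the standard four-phase continuous-wave ToF demodulation by which a depth map can be computed from four phase-shifted IR images, such as the four sub-images of the 2×2 lens array; it is a generic textbook formulation, not the camera's actual processing chain, and the modulation frequency and test values are assumptions.

```python
# Hedged sketch of generic four-phase CW time-of-flight demodulation; assumed
# values, not the MOEMS camera's actual processing.
import numpy as np

C = 3.0e8            # speed of light [m/s]
F_MOD = 20e6         # assumed modulation frequency [Hz] (20 MHz shutter)

def depth_from_four_phases(i0, i90, i180, i270, f_mod=F_MOD):
    """Each i* is an image captured with a shutter phase offset of 0/90/180/270 deg."""
    phase = np.arctan2(i90 - i270, i0 - i180)        # wrapped phase
    phase = np.mod(phase, 2 * np.pi)                 # map to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)           # depth within one ambiguity range

# Illustrative check with a target at 3.0 m.
d_true = 3.0
phi = 4 * np.pi * F_MOD * d_true / C
a, b = 1.0, 0.5                                      # offset and modulation amplitude
i0, i90, i180, i270 = (np.array(a + b * np.cos(phi - k * np.pi / 2)) for k in range(4))
print(depth_from_four_phases(i0, i90, i180, i270))   # ~3.0
```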

  6. A risk-based coverage model for video surveillance camera control optimization

    NASA Astrophysics Data System (ADS)

    Zhang, Hongzhou; Du, Zhiguo; Zhao, Xingtao; Li, Peiyue; Li, Dehua

    2015-12-01

    A visual surveillance system for law enforcement or police case investigation differs from traditional applications, as it is designed to monitor pedestrians, vehicles, or potential accidents. In the present work, visual surveillance risk is defined as the uncertainty of the visual information about monitored targets and events, and a risk entropy is introduced to model the requirements of the police surveillance task on the quality and quantity of video information. The proposed coverage model is applied to calculate the preset FoV positions of PTZ cameras.

  7. Hardware-based smart camera for recovering high dynamic range video from multiple exposures

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2014-10-01

    In many applications such as video surveillance or defect detection, the perception of information related to a scene is limited in areas with strong contrasts. The high dynamic range (HDR) capture technique can deal with these limitations. The proposed method has the advantage of automatically selecting multiple exposure times to make outputs more visible than fixed-exposure ones. A real-time hardware implementation of the HDR technique that shows more details in both dark and bright areas of a scene is an important line of research. For this purpose, we built a dedicated smart camera that performs both capture and HDR video processing from three exposures. What is new in our work is shown through the following points: HDR video capture through multiple exposure control, HDR memory management, and HDR frame generation and representation in a hardware context. Our camera achieves real-time HDR video output at 60 fps and 1.3 megapixels and demonstrates the efficiency of our technique through experimental results. Applications of this HDR smart camera include the movie industry, the mass-consumer market, the military, the automotive industry, and surveillance.
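
    As a software analogue of the fusion stage (the paper itself describes a hardware/FPGA pipeline), the sketch below merges three differently exposed frames into a relative radiance map with a simple hat-shaped weighting; the weighting, the assumed linear response and the synthetic data are illustrative choices.

```python
# Hedged software sketch of multi-exposure HDR fusion; the paper describes a
# hardware/FPGA pipeline, so this is only an analogue under assumed linear response.
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of (H, W) images in [0, 255]; returns a relative radiance map."""
    num = np.zeros_like(frames[0], dtype=float)
    den = np.zeros_like(frames[0], dtype=float)
    for img, t in zip(frames, exposure_times):
        z = img.astype(float)
        w = 1.0 - np.abs(z - 127.5) / 127.5          # hat weight: trust mid-range pixels
        num += w * (np.log(z + 1.0) - np.log(t))     # per-frame log-radiance estimate
        den += w
    return np.exp(num / (den + 1e-6))

# Illustrative use with three synthetic exposures of the same scene.
rng = np.random.default_rng(4)
radiance = rng.uniform(0.05, 20.0, (4, 4))
times = [1 / 120, 1 / 30, 1 / 8]
frames = [np.clip(radiance * t * 255, 0, 255) for t in times]
hdr = merge_hdr(frames, times)
print(np.round(hdr / hdr.max(), 3))
```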

  8. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis

    PubMed Central

    Shaukat, Affan; Blacker, Peter C.; Spiteri, Conrad; Gao, Yang

    2016-01-01

    In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation. PMID:27879625

  9. Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis.

    PubMed

    Shaukat, Affan; Blacker, Peter C; Spiteri, Conrad; Gao, Yang

    2016-11-20

    In recent decades, terrain modelling and reconstruction techniques have increased research interest in precise short and long distance autonomous navigation, localisation and mapping within field robotics. One of the most challenging applications is in relation to autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors like Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification, and depth estimation in terrestrial robotics, but is still under development to become a viable technology for space robotics. This paper will first review current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; then we will propose camera-LIDAR fusion as a feasible technique to overcome the limitations of either of these individual sensors for planetary exploration. A comprehensive analysis will be presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.

  10. An accelerated threshold-based back-projection algorithm for Compton camera image reconstruction

    SciTech Connect

    Mundy, Daniel W.; Herman, Michael G.

    2011-01-15

    Purpose: Compton camera imaging (CCI) systems are currently under investigation for radiotherapy dose reconstruction and verification. The ability of such a system to provide real-time images during dose delivery will be limited by the computational speed of the image reconstruction algorithm. In this work, the authors present a fast and simple method by which to generate an initial back-projected image from acquired CCI data, suitable for use in a filtered back-projection algorithm or as a starting point for iterative reconstruction algorithms, and compare its performance to the current state of the art. Methods: Each detector event in a CCI system describes a conical surface that includes the true point of origin of the detected photon. Numerical image reconstruction algorithms require, as a first step, the back-projection of each of these conical surfaces into an image space. The algorithm presented here first generates a solution matrix for each slice of the image space by solving the intersection of the conical surface with the image plane. Each element of the solution matrix is proportional to the distance of the corresponding voxel from the true intersection curve. A threshold function was developed to extract those pixels sufficiently close to the true intersection to generate a binary intersection curve. This process is repeated for each image plane for each CCI detector event, resulting in a three-dimensional back-projection image. The performance of this algorithm was tested against a marching algorithm known for speed and accuracy. Results: The threshold-based algorithm was found to be approximately four times faster than the current state of the art with minimal deficit to image quality, arising from the fact that a generically applicable threshold function cannot provide perfect results in all situations. The algorithm fails to extract a complete intersection curve in image slices near the detector surface for detector event cones having axes nearly
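
    The cone back-projection with thresholding can be sketched as follows: for each image plane, the angular distance of every pixel from the event cone is computed and pixels within a tolerance are marked as the (binary) intersection curve. This is a plain geometric illustration with invented event parameters, not the authors' optimised solution-matrix formulation.

```python
# Hedged geometric sketch of threshold-based back-projection of one Compton
# event cone onto image planes; invented event parameters, not the authors' code.
import numpy as np

def backproject_cone(apex, axis, cone_angle, grid_xy, z_planes, tol):
    """apex (3,), axis (3,) unit vector, cone_angle [rad];
    grid_xy: (H, W, 2) pixel x/y coordinates; z_planes: iterable of z values."""
    H, W, _ = grid_xy.shape
    image = np.zeros((len(z_planes), H, W))
    for k, z in enumerate(z_planes):
        vox = np.dstack([grid_xy[..., 0], grid_xy[..., 1], np.full((H, W), z)])
        d = vox - apex                                    # voxel-to-apex vectors
        d /= np.linalg.norm(d, axis=2, keepdims=True)
        ang = np.arccos(np.clip(d @ axis, -1.0, 1.0))     # angle to the cone axis
        image[k] = np.abs(ang - cone_angle) < tol         # thresholded intersection curve
    return image

# Illustrative use: one event cone back-projected onto three planes;
# a full reconstruction would accumulate such images over many events.
xs = ys = np.linspace(-10, 10, 128)
grid = np.dstack(np.meshgrid(xs, ys))
img = backproject_cone(apex=np.array([0.0, 0.0, -5.0]),
                       axis=np.array([0.0, 0.0, 1.0]),
                       cone_angle=np.deg2rad(30), grid_xy=grid,
                       z_planes=[0.0, 5.0, 10.0], tol=np.deg2rad(1.0))
print(img.sum(axis=(1, 2)))        # marked pixels per plane
```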

  11. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique has been developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height from the maximum correlation of the apparent patterns in the corresponding pixels, applying a geographic coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to aurora science and potentially form a dense observation network.

  12. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition.

    PubMed

    Sun, Ryan; Bouchard, Matthew B; Hillman, Elizabeth M C

    2010-08-02

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software.

  13. On-Line Detection of Defects on Fruit by Machine Vision Systems Based on Three-Color-Camera Systems

    NASA Astrophysics Data System (ADS)

    Xul, Qiaobao; Zou, Xiaobo; Zhao, Jiewen

    How to identify apple stem-ends and calyxes from defects is still a challenging problem due to the complexity of the process. It is known that the stem-end and the calyx cannot appear in the same image. Therefore, a method for distinguishing contaminated apples is developed in this article: if there are two or more doubtful blobs in an apple's image, the apple is a contaminated one. There is no complex imaging processing or pattern recognition in this method, because it is only necessary to count how many blobs (including the stem-end and calyx) appear in an apple's image. Machine vision systems based on three color cameras for the online detection of external defects are presented in this article. On this system, the fruits placed on rollers rotate while moving, and each camera placed on the line grabs three images of an apple. After the apple is segmented from the black background by a multi-threshold method, defect segmentation and counting are performed on the apple's images. Good separation between normal and contaminated apples was obtained for the three-camera system (94.5%), compared to the one-camera system (63.3%) and the two-camera system (83.7%). The disadvantage of this method is that it cannot distinguish different defect types. Defects of apples, such as bruising, scab, fungal growth, and disease, are treated as the same.

  14. Neutron cameras for ITER

    SciTech Connect

    Johnson, L.C.; Barnes, C.W.; Batistoni, P.

    1998-12-31

    Neutron cameras with horizontal and vertical views have been designed for ITER, based on systems used on JET and TFTR. The cameras consist of fan-shaped arrays of collimated flight tubes, with suitably chosen detectors situated outside the biological shield. The sight lines view the ITER plasma through slots in the shield blanket and penetrate the vacuum vessel, cryostat, and biological shield through stainless steel windows. This paper analyzes the expected performance of several neutron camera arrangements for ITER. In addition to the reference designs, the authors examine proposed compact cameras, in which neutron fluxes are inferred from {sup 16}N decay gammas in dedicated flowing water loops, and conventional cameras with fewer sight lines and more limited fields of view than in the reference designs. It is shown that the spatial sampling provided by the reference designs is sufficient to satisfy target measurement requirements and that some reduction in field of view may be permissible. The accuracy of measurements with {sup 16}N-based compact cameras is not yet established, and they fail to satisfy requirements for parameter range and time resolution by large margins.

  15. CCD Camera

    DOEpatents

    Roth, Roger R.

    1983-01-01

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other.

  16. CCD Camera

    DOEpatents

    Roth, R.R.

    1983-08-02

    A CCD camera capable of observing a moving object which has varying intensities of radiation emanating therefrom and which may move at varying speeds is shown wherein there is substantially no overlapping of successive images and wherein the exposure times and scan times may be varied independently of each other. 7 figs.

  17. Quality controls for gamma cameras and PET cameras: development of a free open-source ImageJ program

    NASA Astrophysics Data System (ADS)

    Carlier, Thomas; Ferrer, Ludovic; Berruchon, Jean B.; Cuissard, Regis; Martineau, Adeline; Loonis, Pierre; Couturier, Olivier

    2005-04-01

    Data acquisition and processing for quality control of gamma cameras and Positron Emission Tomography (PET) cameras are commonly performed with dedicated program packages, which run only on the manufacturers' computers and differ from each other depending on camera company and program version. The aim of this work was to develop a free open-source program (written in the JAVA language) to analyze data for quality control of gamma cameras and PET cameras. The program is based on the free application software ImageJ and can be easily loaded on any computer operating system (OS) and thus on any type of computer in every nuclear medicine department. Based on standard quality control parameters, this program includes 1) for gamma cameras: a rotation center control (extracted from the American Association of Physicists in Medicine, AAPM, norms) and two uniformity controls (extracted from the Institute of Physics and Engineering in Medicine, IPEM, and National Electrical Manufacturers Association, NEMA, norms); 2) for PET systems: three quality controls recently defined by the French Medical Physicist Society (SFPM), i.e. spatial resolution and uniformity in a reconstructed slice and scatter fraction. The determination of spatial resolution (thanks to the Point Spread Function, PSF, acquisition) allows computation of the Modulation Transfer Function (MTF) for both camera modalities. All the control functions are included in a toolbox which is a free ImageJ plugin and could soon be downloaded from the Internet. Besides, this program offers the possibility to save the uniformity quality control results in HTML format, and a warning can be set to automatically inform users in case of abnormal results. The architecture of the program allows users to easily add any other specific quality control program. Finally, this toolkit is an easy and robust tool to perform quality control on gamma cameras and PET cameras based on standard computation parameters, is free, run on
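
    The MTF computation mentioned above can be illustrated by Fourier-transforming a measured PSF profile; the sketch below assumes a 1-D profile and a Gaussian test PSF, and is written in Python rather than as an ImageJ/Java plugin purely for brevity.

```python
# Hedged sketch: deriving an MTF from a 1-D PSF profile by Fourier transform.
# Python is used here only for brevity; the toolkit itself is an ImageJ plugin.
import numpy as np

def mtf_from_psf(psf_profile, pixel_mm):
    """psf_profile: 1-D PSF samples; returns (spatial frequency [1/mm], MTF)."""
    psf = np.asarray(psf_profile, dtype=float)
    psf /= psf.sum()                               # normalise to unit area
    mtf = np.abs(np.fft.rfft(psf))
    mtf /= mtf[0]                                  # MTF(0) = 1 by definition
    freqs = np.fft.rfftfreq(len(psf), d=pixel_mm)
    return freqs, mtf

# Illustrative use: Gaussian PSF with 8 mm FWHM sampled on a 2 mm grid.
pixel = 2.0
x = np.arange(-32, 33) * pixel
sigma = 8.0 / 2.355                                # FWHM -> standard deviation
psf = np.exp(-0.5 * (x / sigma) ** 2)
f, mtf = mtf_from_psf(psf, pixel)
print(f"MTF at 0.1 mm^-1: {np.interp(0.1, f, mtf):.3f}")
```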

  18. Demonstration of First 9 Micron cutoff 640 x 486 GaAs Based Quantum Well Infrared PhotoDetector (QWIP) Snap-Shot Camera

    NASA Technical Reports Server (NTRS)

    Gunapala, S.; Bandara, S. V.; Liu, J. K.; Hong, W.; Sundaram, M.; Maker, P. D.; Muller, R. E.

    1997-01-01

    In this paper, we discuss the development of this very sensitive long-wavelength infrared (LWIR) camera based on a GaAs/AlGaAs QWIP focal plane array (FPA) and its performance in terms of quantum efficiency, NEΔT, uniformity, and operability.

  19. CHARACTERIZATION OF PLASTICALLY-INDUCED STRUCTURAL CHANGES IN A Zr-BASED BULK METALLIC GLASS USING POSITRON ANNIHILATION SPECTROSCOPY

    SciTech Connect

    Flores, K M; Kanungo, B P; Glade, S C; Asoka-Kumar, P

    2005-09-16

    Flow in metallic glasses is associated with stress-induced cooperative rearrangements of small groups of atoms involving the surrounding free volume. Understanding the details of these rearrangements therefore requires knowledge of the amount and distribution of the free volume and how that distribution evolves with deformation. The present study employs positron annihilation spectroscopy to investigate the free volume change in Zr58.5Cu15.6Ni12.8Al10.3Nb2.8 bulk metallic glass after inhomogeneous plastic deformation by cold rolling and structural relaxation by annealing. Results indicate that the size distribution of open volume sites is at least bimodal. The size and concentration of the larger group, identified as flow defects, changes with processing. Following initial plastic deformation the size of the flow defects increases, consistent with the free volume theory for flow. Following more extensive deformation, however, the size distribution of the positron traps shifts, with much larger open volume sites forming at the expense of the flow defects. This suggests that a critical strain is required for flow defects to coalesce and form more stable nanovoids, which have been observed elsewhere by high resolution TEM. Although these results suggest the presence of three distinct open volume size groups, further analysis indicates that all groups have the same line shape parameter. This is in contrast to the distinctly different interactions observed in crystalline materials with multiple defect types. This similarity may be due to the disordered structure of the glass and positron affinity to particular atoms surrounding open-volume regions.

  20. Positron diffusion in Si

    SciTech Connect

    Nielsen, B.; Lynn, K.G.; Vehanen, A.; Schultz, P.J.

    1985-06-01

    Positron diffusion in Si(100) and Si(111) has been studied using a variable-energy positron beam. The positron diffusion coefficient is found to be D+ = 2.7 ± 0.3 cm²/s using a Makhov-type positron implantation profile, which is demonstrated to fit the data more reliably than the more commonly applied exponential profile. The diffusion-related parameter E0, which results from the exponential profile, is found to be 4.2 ± 0.2 keV, significantly larger than previously reported values. A drastic reduction in E0 is found after annealing the sample at 1300 K, showing that previously reported low values of E0 are probably associated with the thermal history of the sample.

  1. The color measurement system for spot color printing based on a multispectral camera

    NASA Astrophysics Data System (ADS)

    Liu, Nanbo; Jin, Weiqi; Huang, Qinmei; Song, Li

    2014-11-01

    Color measurement and control in printing has been an important issue in computer vision technology. In the past, people have used densitometers and spectrophotometers to measure the color of printed products. For the color management of a four-color press, these kinds of meters allow the color data to be measured from a color bar printed at the side of the sheet, followed by ink key presetting. This approach has wide application in the printing field. However, it cannot be used to measure the color of spot color printing or of the printed pattern directly. With the development of multispectral image acquisition, it has become possible to measure the color of the printed pattern in any area of the pattern with a CCD camera that can acquire the whole image of the sheet at high resolution. This paper presents a way to measure the color of printing with a multispectral camera during the printing process. A 12-channel spectral camera with high-intensity white LED illumination, driven by a motor, scans the printed sheet. The acquired image contains color and print-quality information for each pixel, and the LAB and CMYK values of each pixel can be obtained by reconstructing the reflectance spectra of the printed image. With this data processing, we can measure and control the color of spot color printing. Spot tests in a printing plant show that this approach can obtain not only the color bar density values but also the color values of regions of interest (ROI). With these values, ink key presetting can be performed, making it possible to control spot color automatically with high precision.
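
    The reconstruction-and-conversion step can be sketched as follows: a training-based pseudo-inverse maps the 12 camera channels to a reflectance spectrum, which is then weighted into XYZ and converted to Lab. The channel sensitivities, training chips and colour-matching weights below are placeholders (not the real CIE tables), so this is only a structural illustration of the paper's calibrated pipeline.

```python
# Hedged structural sketch: pseudo-inverse reflectance reconstruction from
# multi-channel responses, then conversion to CIE Lab.  Sensitivities, training
# chips and colour-matching weights are placeholders, not calibrated data.
import numpy as np

def reconstruction_matrix(train_refl, train_resp):
    """Least-squares map from camera responses to reflectance spectra.
    train_refl: (M, n_wl), train_resp: (M, n_ch) -> matrix of shape (n_ch, n_wl)."""
    mat, *_ = np.linalg.lstsq(train_resp, train_refl, rcond=None)
    return mat

def xyz_to_lab(xyz, white):
    f = lambda t: np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = (f(xyz[i] / white[i]) for i in range(3))
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

# Illustrative data: 12 channels, 31 wavelength samples (400-700 nm in 10 nm steps).
rng = np.random.default_rng(5)
n_ch, n_wl = 12, 31
sens = rng.uniform(0, 1, (n_ch, n_wl))             # assumed channel sensitivities
train_refl = rng.uniform(0, 1, (50, n_wl))         # training chips (colour patches)
train_resp = train_refl @ sens.T
recon = reconstruction_matrix(train_refl, train_resp)

pixel_resp = train_refl[0] @ sens.T                # responses of one image pixel
refl = pixel_resp @ recon                          # reconstructed reflectance spectrum
cmf = rng.uniform(0, 1, (3, n_wl))                 # placeholder colour-matching weights
xyz = cmf @ refl
lab = xyz_to_lab(xyz, white=cmf.sum(axis=1))       # white point = perfect reflector
print(np.round(lab, 2))
```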

  2. Clinical application of in vivo treatment delivery verification based on PET/CT imaging of positron activity induced at high energy photon therapy

    NASA Astrophysics Data System (ADS)

    Janek Strååt, Sara; Andreassen, Björn; Jonsson, Cathrine; Noz, Marilyn E.; Maguire, Gerald Q., Jr.; Näfstadius, Peder; Näslund, Ingemar; Schoenahl, Frederic; Brahme, Anders

    2013-08-01

    The purpose of this study was to investigate in vivo verification of radiation treatment with high energy photon beams using PET/CT to image the induced positron activity. The measurements of the positron activation induced in a preoperative rectal cancer patient and a prostate cancer patient following 50 MV photon treatments are presented. A total dose of 5 and 8 Gy, respectively, were delivered to the tumors. Imaging was performed with a 64-slice PET/CT scanner for 30 min, starting 7 min after the end of the treatment. The CT volume from the PET/CT and the treatment planning CT were coregistered by matching anatomical reference points in the patient. The treatment delivery was imaged in vivo based on the distribution of the induced positron emitters produced by photonuclear reactions in tissue mapped on to the associated dose distribution of the treatment plan. The results showed that the spatial distribution of induced activity in both patients agreed well with the delivered beam portals of the treatment plans in the entrance subcutaneous fat regions but less so in blood and oxygen rich soft tissues. For the preoperative rectal cancer patient, however, a 2 ± 0.5 cm misalignment was observed in the cranial-caudal direction of the patient between the induced activity distribution and the treatment plan, indicating a beam-patient setup error. No misalignment of this kind was seen in the prostate cancer patient. However, due to an error in the fast patient setup in the PET/CT scanner, a slight mis-positioning of the patient was observed in all three planes, resulting in a deformed activity distribution compared to the treatment plan. The present study indicates that the positron emitters induced by high energy photon beams can be measured quite accurately using PET imaging of subcutaneous fat to allow portal verification of the delivered treatment beams. Measurement of the induced activity in the patient 7 min after receiving 5 Gy involved count rates which were about

  3. COMPACT CdZnTe-BASED GAMMA CAMERA FOR PROSTATE CANCER IMAGING

    SciTech Connect

    CUI, Y.; LALL, T.; TSUI, B.; YU, J.; MAHLER, G.; BOLOTNIKOV, A.; VASKA, P.; DeGERONIMO, G.; O'CONNOR, P.; MEINKEN, G.; JOYAL, J.; BARRETT, J.; CAMARDA, G.; HOSSAIN, A.; KIM, K.H.; YANG, G.; POMPER, M.; CHO, S.; WEISMAN, K.; SEO, Y.; BABICH, J.; LaFRANCE, N.; AND JAMES, R.B.

    2011-10-23

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  4. Real object-based integral imaging system using a depth camera and a polygon model

    NASA Astrophysics Data System (ADS)

    Jeong, Ji-Seong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Lim, Byung-Muk; Jang, Ho-Wook; Kim, Nam; Yoo, Kwan-Hee

    2017-01-01

    An integral imaging system using a polygon model for a real object is proposed. After depth and color data of the real object are acquired by a depth camera, the grid of the polygon model is converted from the initially reconstructed point cloud model. The elemental image array is generated from the polygon model and directly reconstructed. The polygon model eliminates the failed picking areas between the points of a point cloud model, so the quality of the reconstructed 3-D image is significantly improved. The theory is verified experimentally, and higher-quality images are obtained.

  5. Development of a Compton camera for medical applications based on silicon strip and scintillation detectors

    NASA Astrophysics Data System (ADS)

    Krimmer, J.; Ley, J.-L.; Abellan, C.; Cachemiche, J.-P.; Caponetto, L.; Chen, X.; Dahoumane, M.; Dauvergne, D.; Freud, N.; Joly, B.; Lambert, D.; Lestand, L.; Létang, J. M.; Magne, M.; Mathez, H.; Maxim, V.; Montarou, G.; Morel, C.; Pinto, M.; Ray, C.; Reithinger, V.; Testa, E.; Zoccarato, Y.

    2015-07-01

    A Compton camera is being developed for the purpose of ion-range monitoring during hadrontherapy via the detection of prompt-gamma rays. The system consists of a scintillating fiber beam tagging hodoscope, a stack of double sided silicon strip detectors (90×90×2 mm3, 2×64 strips) as scatter detectors, as well as bismuth germanate (BGO) scintillation detectors (38×35×30 mm3, 100 blocks) as absorbers. The individual components will be described, together with the status of their characterization.

  6. Dynamic phase measurements based on a polarization Michelson interferometer employing a pixelated polarization camera

    NASA Astrophysics Data System (ADS)

    Serrano-Garcia, David I.; Otani, Yukitoshi

    2017-02-01

    We implemented an interferometric configuration capable of following a phase variation in time. By using a pixelated polarization camera, the system is able to retrieve the phase information instantaneously, avoiding the use of moving components and the need for an extra replication method attached at the output of the interferometer. Taking advantage of the temporal stability of the system, a spatial-temporal phase demodulation algorithm can be implemented in the frequency domain for dynamic phase measurement. The spatial resolution is analyzed experimentally using a USAF pattern, and dynamic phase measurements were made of air and water medium variations due to a jet flame and a living fish as a biological sample, respectively.
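
    A generic form of the per-frame demodulation enabled by a pixelated polarization camera is sketched below: the 0/45/90/135 degree micro-polarizer pixels of one frame provide four interferograms shifted by 90 degrees in phase, from which the wrapped phase is recovered per super-pixel. The pixel layout and test data are assumptions, not the authors' processing.

```python
# Hedged sketch of generic pixelated-polarizer phase demodulation; the 2x2
# super-pixel layout and test data are assumptions, not the authors' processing.
import numpy as np

def phase_from_polarization_frame(frame):
    """frame: (2H, 2W) raw image whose 2x2 super-pixels hold the 0/45/90/135
    degree polarizer samples (assumed layout)."""
    i0   = frame[0::2, 0::2].astype(float)
    i45  = frame[0::2, 1::2].astype(float)
    i90  = frame[1::2, 1::2].astype(float)
    i135 = frame[1::2, 0::2].astype(float)
    # I_theta = A + B*cos(phi + 2*theta): four samples 90 degrees apart in phase
    return np.arctan2(i135 - i45, i0 - i90)          # wrapped phase in [-pi, pi]

# Illustrative check with a known phase ramp.
H, W = 8, 8
phi_true = np.linspace(-2.0, 2.0, H * W).reshape(H, W)
frame = np.zeros((2 * H, 2 * W))
for (r, c), theta in zip([(0, 0), (0, 1), (1, 1), (1, 0)], np.deg2rad([0, 45, 90, 135])):
    frame[r::2, c::2] = 100 + 50 * np.cos(phi_true + 2 * theta)
print(np.allclose(phase_from_polarization_frame(frame), phi_true))
```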

  7. A fast moving object detection method based on 2D laser scanner and infrared camera

    NASA Astrophysics Data System (ADS)

    Zeng, Lina; Ding, Meng; Zhang, Tianci; Sun, Zejun

    2015-10-01

    Moving object detection is a major research direction in video surveillance systems. This paper proposes a novel approach for moving object detection by fusing information from a laser scanner and an infrared camera. First, in accordance with the characteristics of laser scanner data, we apply robust principal component analysis (RPCA) to moving object detection. Then the depth and angle information of moving objects is mapped to the infrared image pixels so as to obtain the regions of interest (ROI). Finally, moving objects can be recognized by examining the ROI. Experimental results show that this method has good real-time performance and accuracy.
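
    To make the RPCA idea concrete, the sketch below uses a simplified GoDec-style alternation (not the paper's exact RPCA solver): a data matrix built from scan frames is split into a low-rank background part and a sparse part that captures moving objects.

```python
# Hedged sketch: a simplified GoDec-style low-rank + sparse split standing in
# for RPCA; not the paper's solver, data are synthetic.
import numpy as np

def low_rank_sparse_split(D, rank=1, thresh=1.0, n_iter=20):
    """Alternate a best rank-`rank` fit (background) with hard-thresholded
    residuals (sparse foreground)."""
    S = np.zeros_like(D)
    L = np.zeros_like(D)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # low-rank background update
        R = D - L
        S = np.where(np.abs(R) > thresh, R, 0.0)       # keep only large residuals
    return L, S

# Illustrative use: a static rank-1 "scene" plus one localised disturbance.
rng = np.random.default_rng(6)
D = np.outer(rng.uniform(1, 2, 60), np.ones(40))       # 60 frames x 40 range bins
D[25:30, 18:22] += 5.0                                 # sparse "moving object"
L, S = low_rank_sparse_split(D)
print("foreground location:", np.unravel_index(np.abs(S).argmax(), S.shape))
```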

  8. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  9. Total cross sections for positrons scattered elastically from helium based on new measurements of total ionization cross sections

    NASA Technical Reports Server (NTRS)

    Diana, L. M.; Chaplin, R. L.; Brooks, D. L.; Adams, J. T.; Reyna, L. K.

    1990-01-01

    An improved technique is presented for employing the 2.3m spectrometer to measure total ionization cross sections, Q_ion, for positrons incident on He. The new ionization cross sections agree with the values reported earlier. Estimates are also presented of the total elastic scattering cross sections, Q_el, obtained by subtracting from the total scattering cross sections, Q_tot, reported in the literature, the Q_ion and Q_Ps (total positronium formation cross sections) and the total excitation cross sections, Q_ex, published by another researcher. The Q_ion and Q_el measured with the 3m high-resolution time-of-flight spectrometer for 54.9 eV positrons are in accord with the results from the 2.3m spectrometer. The ionization cross sections are in fair agreement with theory, tending for the most part to be higher, especially at 76.3 and 88.5 eV. The elastic cross sections agree quite well with theory up to the vicinity of 50 eV, but at 60 eV and above the experimental elastic cross sections climb to and remain at about 0.30 π a0², while the theoretical values steadily decrease.

  10. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016

  11. Visual odometry based on structural matching of local invariant features using stereo camera sensor.

    PubMed

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing the so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robotics navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields.

  12. Positron annihilation studies of organic superconductivity

    SciTech Connect

    Yen, H.L.; Lou, Y.; Ali, E.H.

    1994-09-01

    The positron lifetimes of two organic superconductors, {kappa}-(ET){sub 2}Cu(NCS){sub 2} and {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br, are measured as a function of temperature across {Tc}. A drop of the positron lifetime below {Tc} is observed. Positron-electron momentum densities are measured by using 2D-ACAR to search for the Fermi surface in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br. Positron density distributions and positron-electron overlaps are calculated by using the orthogonalized linear combination of atomic orbitals (OLCAO) method to interpret the temperature dependence due to the local charge transfer, which is inferred to relate to the superconducting transition. 2D-ACAR results in {kappa}-(ET){sub 2}Cu[N(CN){sub 2}]Br are compared with theoretical band calculations based on a first-principles local density approximation. The importance of performing accurate band calculations for the interpretation of positron annihilation data is emphasized.

  13. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first is the intra-camera geometry estimation, which leads to an estimate of the tilt angle, focal length and camera height; these are important for the conversion from pixels to meters and vice versa. The second is the inter-camera topology inference, which leads to an estimate of the distance between cameras; this is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
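
The record does not give the conversion formulas, but the pixel-to-meter relation it refers to follows from standard flat-ground pinhole geometry once the tilt angle, focal length, and camera height are known. The sketch below is an illustrative assumption of that conversion (not the authors' code); the function name and the example numbers are hypothetical.

```python
import numpy as np

def pixel_to_ground_distance(v, f_px, tilt_rad, cam_height_m, v0):
    """Map an image row v (pixels) to a ground-plane distance from the camera.

    Assumes a pinhole camera looking down at angle `tilt_rad` over flat ground,
    with principal-point row `v0` and focal length `f_px` in pixels.
    """
    # Angle of the ray through row v, relative to the optical axis
    ray_angle = np.arctan((v - v0) / f_px)
    # Total depression angle of that ray below the horizontal
    depression = tilt_rad + ray_angle
    if depression <= 0:
        return np.inf  # ray at or above the horizon: no ground intersection
    return cam_height_m / np.tan(depression)

# Example: camera 3 m high, tilted 20 degrees down, 1000 px focal length
print(pixel_to_ground_distance(v=700, f_px=1000.0, tilt_rad=np.radians(20),
                               cam_height_m=3.0, v0=540.0))
```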

  14. Individualized Positron Emission Tomography–Based Isotoxic Accelerated Radiation Therapy Is Cost-Effective Compared With Conventional Radiation Therapy: A Model-Based Evaluation

    SciTech Connect

    Bongers, Mathilda L.; Coupé, Veerle M.H.; De Ruysscher, Dirk; Oberije, Cary; Lambin, Philippe; Uyl-de Groot, Cornelia A.

    2015-03-15

    Purpose: To evaluate long-term health effects, costs, and cost-effectiveness of positron emission tomography (PET)-based isotoxic accelerated radiation therapy treatment (PET-ART) compared with conventional fixed-dose CT-based radiation therapy treatment (CRT) in non-small cell lung cancer (NSCLC). Methods and Materials: Our analysis uses a validated decision model, based on data of 200 NSCLC patients with inoperable stage I-IIIB. Clinical outcomes, resource use, costs, and utilities were obtained from the Maastro Clinic and the literature. Primary model outcomes were the difference in life-years (LYs), quality-adjusted life-years (QALYs), costs, and the incremental cost-effectiveness and cost/utility ratio (ICER and ICUR) of PET-ART versus CRT. Model outcomes were obtained from averaging the predictions for 50,000 simulated patients. A probabilistic sensitivity analysis and scenario analyses were carried out. Results: The average incremental costs per patient of PET-ART were €569 (95% confidence interval [CI] €−5327-€6936) for 0.42 incremental LYs (95% CI 0.19-0.61) and 0.33 QALYs gained (95% CI 0.13-0.49). The base-case scenario resulted in an ICER of €1360 per LY gained and an ICUR of €1744 per QALY gained. The probabilistic analysis gave a 36% probability that PET-ART improves health outcomes at reduced costs and a 64% probability that PET-ART is more effective at slightly higher costs. Conclusion: On the basis of the available data, individualized PET-ART for NSCLC seems to be cost-effective compared with CRT.
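
For readers unfamiliar with the reported ratios, the ICER and ICUR are simply the incremental cost divided by the incremental life-years or QALYs. A minimal sketch using the figures quoted in the abstract; the small differences from the reported €1360 and €1744 come from rounding of the published inputs.

```python
def icer(delta_cost, delta_effect):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of effect."""
    return delta_cost / delta_effect

# Figures from the abstract: +569 euro, +0.42 life-years, +0.33 QALYs
print(round(icer(569, 0.42)))   # ~1355 euro per life-year gained (abstract: 1360)
print(round(icer(569, 0.33)))   # ~1724 euro per QALY gained (abstract: 1744)
```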

  15. Smart pixel camera based signal processing in an interferometric test station for massive parallel inspection of MEMS and MOEMS

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Lambelet, Patrick; Røyset, Arne; Kujawińska, Małgorzata; Gastinger, Kay

    2010-09-01

    The paper presents the electro-optical design of an interferometric inspection system for massive parallel inspection of Micro-(Opto-)Electro-Mechanical Systems (M(O)EMS). The basic idea is to adapt a micro-optical probing wafer to the M(O)EMS wafer under test. The probing wafer is exchangeable and contains a micro-optical interferometer array: a low coherent interferometer (LCI) array based on a Mirau configuration and a laser interferometer (LI) array based on a Twyman-Green configuration. The interference signals are generated in the micro-optical interferometers and are applied for M(O)EMS shape and deformation measurements by means of LCI and for M(O)EMS vibration analysis (the resonance frequency and spatial mode distribution) by means of LI. A distributed array of 5×5 smart pixel imagers detects the interferometric signals. The signal processing is based on the "on pixel" processing capacity of the smart pixel camera array, which can be utilised for phase shifting, signal demodulation or envelope maximum determination. Each micro-interferometer image is detected by a 140 × 146 pixel sub-array distributed in the imaging plane. In the paper, the architecture of the smart-pixel cameras is described and their application for massive parallel electro-optical detection and data reduction is discussed. The full data processing paths for the laser interferometer and the low coherent interferometer are presented.

  16. Camera-based measurement for transverse vibrations of moving catenaries in mine hoists using digital image processing techniques

    NASA Astrophysics Data System (ADS)

    Yao, Jiannan; Xiao, Xingming; Liu, Yao

    2016-03-01

    This paper proposes a novel, non-contact sensing method to measure the transverse vibrations of hoisting catenaries in mine hoists. Hoisting catenaries are typically moving cables, and it is not feasible to use traditional methods to measure their transverse vibrations. In order to obtain the transverse displacements of an arbitrary point in a moving catenary, a mask image containing a predefined reference line perpendicular to the hoisting catenaries is superposed on each frame of the processed image sequence, so that the dynamic intersecting points with a grey value of 0 can be identified throughout the image sequence. Subsequently, by traversing the coordinates of the pixels with a grey value of 0 and calculating the distance between the identified dynamic points and the reference, the transverse displacements of the selected arbitrary point in the hoisting catenary can be obtained. Furthermore, based on a theoretical model, the reasonableness and applicability of the proposed camera-based method were confirmed. Additionally, a laboratory experiment was also carried out, which then validated the accuracy of the proposed method. The research results indicate that the proposed camera-based method is suitable for the measurement of the transverse vibrations of moving cables.
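
A minimal sketch of the measurement idea as described, assuming frames have already been thresholded so that catenary pixels carry grey value 0; the function name and reference coordinates are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def transverse_displacement(frame, ref_row, ref_col):
    """Estimate the transverse displacement of a cable in one frame.

    `frame` is a 2-D grey-level image in which the cable pixels have value 0
    (e.g. after thresholding). `ref_row` is the image row of the reference
    line perpendicular to the catenary; `ref_col` is the column of the
    cable's rest position on that line.
    """
    row = frame[ref_row, :]
    cable_cols = np.flatnonzero(row == 0)   # dark (cable) pixels on the reference line
    if cable_cols.size == 0:
        return None                         # cable not visible on this line
    center = cable_cols.mean()              # centre of the cable cross-section
    return center - ref_col                 # displacement in pixels

# Example usage over an image sequence (frames: list of 2-D numpy arrays)
# displacements = [transverse_displacement(f, ref_row=240, ref_col=320) for f in frames]
```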

  17. Assimilation of PFISR Data Using Support Vector Regression and Ground Based Camera Constraints

    NASA Astrophysics Data System (ADS)

    Clayton, R.; Lynch, K. A.; Nicolls, M. J.; Hampton, D. L.; Michell, R.; Samara, M.; Guinther, J.

    2013-12-01

    In order to best interpret the information gained from multipoint in situ measurements, a Support Vector Regression (SVR) algorithm is being developed to place the data collected from the instruments in the context of ground observations (such as those from a camera or radar array). The idea behind SVR is to construct the simplest function that models the data with the least squared error, subject to constraints given by the user. Constraints can be brought into the algorithm from other data sources or from models. As is often the case with data, a perfect solution to such a problem may be impossible, thus 'slack' may be introduced to control how closely the model adheres to the data. The algorithm employs kernels, with radial basis functions chosen as an appropriate kernel. The current SVR code can take input data as one- to three-dimensional scalars or vectors, and may also include time. External data can be incorporated and assimilated into a model of the environment. Regions of minimal and maximal values are allowed to relax to the sample average (or a user-supplied model) on size and time scales determined by user input, known as feature sizes. These feature sizes can vary for each degree of freedom if the user desires. The user may also select weights for each data point, if it is desirable to weight parts of the data differently. In order to test the algorithm, Poker Flat Incoherent Scatter Radar (PFISR) and MICA sounding rocket data are being used as sample data. The PFISR data consists of many beams, each with multiple ranges. In addition to analyzing the radar data as it stands, the algorithm is being used to simulate data from a localized ionospheric swarm of CubeSats using existing PFISR data. The sample points of the radar at one altitude slice can serve as surrogates for satellites in a cubeswarm. The number of beams of the PFISR radar can then be used to see what the algorithm would output for a swarm of similar size. By using PFISR data in the 15-beam to
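
The record describes a custom SVR-based assimilation code; as an illustration only, a generic RBF-kernel SVR fit (here via scikit-learn, on synthetic stand-in data) shows how the slack parameters (C, epsilon) and a spatial "feature size" (the kernel width) enter such a regression.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for radar samples: positions (e.g. beam x, y at one
# altitude slice) and measured values (e.g. a normalized density).
rng = np.random.default_rng(0)
positions = rng.uniform(-50.0, 50.0, size=(200, 2))          # km
values = np.exp(-np.sum(positions**2, axis=1) / 500.0) + 0.05 * rng.normal(size=200)

# RBF kernel; gamma sets the spatial feature size, epsilon/C control the slack.
model = SVR(kernel="rbf", gamma=1.0 / (2 * 15.0**2), C=10.0, epsilon=0.02)
model.fit(positions, values)

# Evaluate the fitted field on a grid for visualisation or assimilation.
grid = np.stack(np.meshgrid(np.linspace(-50, 50, 21), np.linspace(-50, 50, 21)), axis=-1)
field = model.predict(grid.reshape(-1, 2)).reshape(21, 21)
print(field.shape)
```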

  18. The new SCOS-based EGSE of the EPIC flight-spare on-ground cameras

    NASA Astrophysics Data System (ADS)

    La Palombara, Nicola; Abbey, Anthony; Insinga, Fernando; Calderon-Riano, Pedro; Casale, Mauro; Kirsch, Marcus; Martin, James; Munoz, Ramon; Palazzo, Maddalena; Poletti, Mauro; Sembay, Steve; Vallejo, Juan C.; Villa, Gabriele

    2014-07-01

    The XMM-Newton observatory, launched by the European Space Agency in 1999, is still one of the scientific community's most important high-energy astrophysics missions. After almost 15 years in orbit its instruments continue to operate smoothly with a performance close to the immediate post-launch status. The competition for the observing time remains very high with ESA reporting a very healthy over-subscription factor. Due to the efficient use of spacecraft consumables XMM-Newton could potentially be operated into the next decade. However, since the mission was originally planned for 10 years, progressive ageing and/or failures of the on-board instrumentation can be expected. Dealing with them could require substantial changes of the on-board operating software, and of the command and telemetry database, which could potentially have unforeseen consequences for the on-board equipment. In order to avoid this risk, it is essential to test these changes on ground, before their upload. To this aim, two flight-spare cameras of the EPIC experiment (one MOS and one PN) are available on-ground. Originally they were operated through an Electrical Ground Support Equipment (EGSE) system which was developed over 15 years ago to support the test campaigns up to the launch. The EGSE used a specialized command language running on now obsolete workstations. ESA and the EPIC Consortium, therefore, decided to replace it with new equipment in order to fully reproduce on-ground the on-board configuration and to operate the cameras with SCOS2000, the same Mission Control System used by ESA to control the spacecraft. This was a demanding task, since it required both the recovery of the detailed knowledge of the original EGSE and the adjustment of SCOS for this special use. Recently this work has been completed by replacing the EGSE of one of the two cameras, which is now ready to be used by ESA. Here we describe the scope and purpose of this activity, the problems faced during its

  19. Measuring positron-atom binding energies through laser-assisted photorecombination

    NASA Astrophysics Data System (ADS)

    Surko, C. M.; Danielson, J. R.; Gribakin, G. F.; Continetti, R. E.

    2012-06-01

    Described here is a proposed experiment to use laser-assisted photorecombination of positrons from a trap-based beam and metal atoms in the gas phase to measure positron-atom binding energies. Signal rates are estimated, based in part upon experience studying resonant annihilation spectra using a trap-based positron beam.

  20. Plasma image edge detection based on the visible camera in the EAST device.

    PubMed

    Shu, Shuangbao; Xu, Chongyang; Chen, Meiwen; Yang, Zhendong

    2016-01-01

    Controlling the plasma shape and position is essential to the success of a Tokamak discharge. A real-time image acquisition system was designed to obtain plasma radiation images during the discharge processes in the Experimental Advanced Superconducting Tokamak (EAST) device. The hardware structure and software design of this visible camera system are introduced in detail. According to the general structure of EAST and the layout of the observation window, the spatial location of the discharging plasma in the image was measured. An improved Sobel edge detection algorithm using an iterative threshold was proposed to detect the plasma boundary. EAST discharge results show that the proposed method acquires the plasma position and boundary with high accuracy, which is of great significance for better plasma control.
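
The abstract does not detail the iterative threshold; the sketch below assumes the classic isodata-style scheme applied to a Sobel gradient magnitude, which is one plausible reading of the described method rather than the authors' exact algorithm.

```python
import numpy as np
import cv2

def iterative_threshold(img, tol=0.5):
    """Isodata-style iterative threshold: midpoint of the two class means."""
    t = float(img.mean())
    while True:
        low, high = img[img <= t], img[img > t]
        if low.size == 0 or high.size == 0:
            return t
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

def plasma_edge(gray):
    """Binary edge map of the plasma boundary from one visible-camera frame (2-D uint8)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)                  # Sobel gradient magnitude
    t = iterative_threshold(mag)            # iteratively selected threshold
    return (mag >= t).astype(np.uint8) * 255
```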

  1. Design of a smartphone-camera-based fluorescence imaging system for the detection of oral cancer

    NASA Astrophysics Data System (ADS)

    Uthoff, Ross

    Shown is the design of the Smartphone Oral Cancer Detection System (SOCeeDS). The SOCeeDS attaches to a smartphone and utilizes its embedded imaging optics and sensors to capture images of the oral cavity to detect oral cancer. Violet illumination sources excite the oral tissues to induce fluorescence. Images are captured with the smartphone's onboard camera. Areas where the tissues of the oral cavity are darkened signify an absence of fluorescence signal, indicating breakdown in tissue structure brought on by precancerous or cancerous conditions. With these data, the patient can seek further testing and diagnosis as needed. Proliferation of this device will give communities with limited access to healthcare professionals a tool to detect cancer in its early stages, increasing the likelihood of cancer reversal.

  2. Formation of the color image based on the vidicon TV camera

    NASA Astrophysics Data System (ADS)

    Iureva, Radda A.; Maltseva, Nadezhda K.; Dunaev, Vadim I.

    2016-09-01

    The main goal of nuclear safety is to protect against radiation arising during the normal operation of nuclear power plant (NPP) installations, or as a result of accidents on them. The most important task in any activity aimed at the maintenance of an NPP is constant maintenance of the desired level of safety and reliability. Periodic non-destructive testing during operation provides the most relevant criteria for the integrity of the pressure components of the primary circuit. The objective of this study is to develop a system for forming a color image from a vidicon-based television camera, which is used to conduct non-destructive testing under conditions of increased radiation at NPPs.

  3. Unconstrained face detection and recognition based on RGB-D camera for the visually impaired

    NASA Astrophysics Data System (ADS)

    Zhao, Xiangdong; Wang, Kaiwei; Yang, Kailun; Hu, Weijian

    2017-02-01

    It is highly important for visually impaired people (VIP) to be aware of the human beings around them, so correctly recognizing people with a VIP-assisting apparatus provides great convenience. However, in classical face recognition technology, the faces used in training and prediction procedures are usually frontal, and the procedures for acquiring face images require subjects to get close to the camera so that a frontal pose and adequate illumination are guaranteed. Meanwhile, labels of faces are defined manually rather than automatically; most of the time, labels belonging to different classes need to be input one by one. These constraints prevent practical assisting applications for VIP. In this article, a face recognition system for an unconstrained environment is proposed. Specifically, it doesn't require the frontal pose or uniform illumination required by previous algorithms. The contributions of this work lie in three aspects. First, a real-time frontal-face synthesizing enhancement is implemented, and the synthesized frontal faces help to increase the recognition rate, which is proved with experimental results. Second, an RGB-D camera plays a significant role in our system: both color and depth information are utilized to achieve real-time face tracking, which not only raises the detection rate but also provides a way to label faces automatically. Finally, we propose to use neural networks to train a face recognition system, and Principal Component Analysis (PCA) is applied to pre-refine the input data. This system is expected to provide convenient help for VIP to get familiar with others, and to enable them to recognize people once the system has been sufficiently trained.

  4. Are We Ready for Positron Emission Tomography/Computed Tomography-based Target Volume Definition in Lymphoma Radiation Therapy?

    SciTech Connect

    Yeoh, Kheng-Wei; Mikhaeel, N. George

    2013-01-01

    Fluorine-18 fluorodeoxyglucose (FDG)-positron emission tomography (PET)/computed tomography (CT) has become indispensable for the clinical management of lymphomas. With consistent evidence that it is more accurate than anatomic imaging in the staging and response assessment of many lymphoma subtypes, its utility continues to increase. There have therefore been efforts to incorporate PET/CT data into radiation therapy decision making and in the planning process. Further, there have also been studies investigating target volume definition for radiation therapy using PET/CT data. This article will critically review the literature and ongoing studies on the above topics, examining the value and methods of adding PET/CT data to the radiation therapy treatment algorithm. We will also discuss the various challenges and the areas where more evidence is required.

  5. Phantom experiments on a PSAPD-based compact gamma camera with submillimeter spatial resolution for small animal SPECT

    PubMed Central

    Kim, Sangtaek; McClish, Mickel; Alhassen, Fares; Seo, Youngho; Shah, Kanai S.; Gould, Robert G.

    2010-01-01

    We demonstrate a position sensitive avalanche photodiode (PSAPD) based compact gamma camera for the application of small animal single photon emission computed tomography (SPECT). The silicon PSAPD with a two-dimensional resistive layer and four readout channels is implemented as a gamma ray detector to record the energy and position of radiation events from a radionuclide source. A 2 mm thick monolithic CsI:Tl scintillator is optically coupled to a PSAPD with an 8 mm × 8 mm active area, providing submillimeter intrinsic spatial resolution, high energy resolution (16% full-width at half-maximum at 140 keV) and high gain. A mouse heart phantom filled with an aqueous solution of 370 MBq 99mTc-pertechnetate (140 keV) was imaged using the PSAPD detector module and a tungsten knife-edge pinhole collimator with a 0.5 mm diameter aperture. The PSAPD detector module was cooled with cold nitrogen gas to suppress dark current shot noise. For each projection image of the mouse heart phantom, a rotated diagonal readout algorithm was used to calculate the position of radiation events and correct for pincushion distortion. The reconstructed image of the mouse heart phantom demonstrated reproducible image quality with submillimeter spatial resolution (0.7 mm), showing the feasibility of using the compact PSAPD-based gamma camera for a small animal SPECT system. PMID:21278833

  6. Trend of digital camera and interchangeable zoom lenses with high ratio based on patent application over the past 10 years

    NASA Astrophysics Data System (ADS)

    Sensui, Takayuki

    2012-10-01

    Although digitalization has tripled the consumer-class camera market, extreme reductions in the prices of fixed-lens cameras have reduced profitability. As a result, a number of manufacturers have entered the market for system DSCs, i.e., digital still cameras with interchangeable lenses, where large profit margins are possible, and many high-ratio zoom lenses with image stabilization functions have been released. Quiet actuators are another indispensable component. A design with little degradation in performance due to all types of errors is preferred for a good balance in terms of size, lens performance, and the ratio of acceptable to sub-standard products. The decentering sensitivity of moving groups, such as that caused by tilting, is especially important. In addition, image stabilization mechanisms actively shift lens groups. The development of high-ratio zoom lenses with a vibration reduction mechanism is confronted by the challenge of reduced performance due to decentering, making control over the decentering sensitivity between lens groups essential. While there are a number of ways to align lenses (axial alignment), shock resistance and the ability to stand up to environmental conditions must also be considered. Naturally, it is very difficult, if not impossible, to make lenses smaller and achieve a low decentering sensitivity at the same time. A 4-group zoom construction is beneficial in making lenses smaller, but its decentering sensitivity is greater. A 5-group zoom configuration makes smaller lenses more difficult, but it enables lower decentering sensitivities. At Nikon, the most advantageous construction is selected for each lens based on its specifications. The AF-S DX NIKKOR 18-200mm f/3.5-5.6G ED VR II and AF-S NIKKOR 28-300mm f/3.5-5.6G ED VR are excellent examples of this.

  7. Alternative positron-target design for electron-positron colliders

    SciTech Connect

    Donahue, R.J. ); Nelson, W.R. )

    1991-04-01

    Current electron-positron linear colliders are limited in luminosity by the number of positrons which can be generated from targets presently used. This paper examines the possibility of using an alternate wire-target geometry for the production of positrons via an electron-induced electromagnetic cascade shower. 39 refs., 38 figs., 5 tabs.

  8. Assessment of Tumor Volumes in Skull Base Glomus Tumors Using Gluc-Lys[{sup 18}F]-TOCA Positron Emission Tomography

    SciTech Connect

    Astner, Sabrina T.; Bundschuh, Ralph A.; Beer, Ambros J.; Ziegler, Sibylle I.; Krause, Bernd J.; Schwaiger, Markus; Molls, Michael; Grosu, Anca L.; Essler, Markus

    2009-03-15

    Purpose: To assess a threshold for Gluc-Lys[{sup 18}F]-TOCA positron emission tomography (PET) in target volume delineation of glomus tumors in the skull base and to compare with MRI-based target volume delineation. Methods and Materials: The threshold for volume segmentation in the PET images was determined by a phantom study. Nine patients with a total of 11 glomus tumors underwent PET either with Gluc-Lys[{sup 18}F]-TOCA or with {sup 68}Ga-DOTATOC (in 1 case). All patients were additionally scanned by MRI. Positron emission tomography and MR images were transferred to a treatment-planning system; MR images were analyzed for lesion volume by two observers, and PET images were analyzed by a semiautomated thresholding algorithm. Results: Our phantom study revealed that 32% of the maximum standardized uptake value is an appropriate threshold for tumor segmentation in PET-based target volume delineation of gross tumors. Target volume delineation by MRI was characterized by high interobserver variability. In contrast, interobserver variability was minimal if fused PET/MRI images were used. The gross tumor volumes (GTVs) determined by PET (GTV-PET) showed a statistically significant correlation with the GTVs determined by MRI (GTV-MRI) in primary tumors; in recurrent tumors higher differences were found. The mean GTV-MRI was significantly higher than mean GTV-PET. The increase added by MRI to the common volume was due to scar tissue with strong signal enhancement on MRI. Conclusions: In patients with glomus tumors, Gluc-Lys[{sup 18}F]-TOCA PET helps to reduce interobserver variability if an appropriate threshold for tumor segmentation has been determined for institutional conditions. Especially in patients with recurrent tumors after surgery, Gluc-Lys[{sup 18}F]-TOCA PET improves the accuracy of GTV delineation.
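
A minimal sketch of the thresholding rule reported (voxels at or above 32% of the lesion's maximum standardized uptake value), assuming an SUV volume and a rough region-of-interest mask are available; the voxel size in the usage comment is a hypothetical value.

```python
import numpy as np

def segment_gtv(suv_volume, roi_mask, threshold_fraction=0.32):
    """Segment a PET gross tumor volume by thresholding at a fraction of SUVmax.

    `suv_volume` is a 3-D array of standardized uptake values; `roi_mask` is a
    boolean array restricting the search to a region around the lesion.
    """
    suv_max = suv_volume[roi_mask].max()
    gtv_mask = roi_mask & (suv_volume >= threshold_fraction * suv_max)
    return gtv_mask

# Volume in millilitres, assuming isotropic 2 mm voxels (0.2 cm per side, assumed):
# voxel_ml = 0.2 ** 3
# gtv_ml = segment_gtv(suv, roi).sum() * voxel_ml
```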

  9. Quantifying the yellow signal driver behavior based on naturalistic data from digital enforcement cameras.

    PubMed

    Bar-Gera, H; Musicant, O; Schechtman, E; Ze'evi, T

    2016-11-01

    The yellow signal driver behavior, reflecting the dilemma zone behavior, is analyzed using naturalistic data from digital enforcement cameras. The key variable in the analysis is the entrance time after the yellow onset, and its distribution. This distribution can assist in determining two critical outcomes: the safety outcome related to red-light-running angle accidents, and the efficiency outcome. The connection to other approaches for evaluating the yellow signal driver behavior is also discussed. The dataset was obtained from 37 digital enforcement cameras at non-urban signalized intersections in Israel, over a period of nearly two years. The data contain more than 200 million vehicle entrances, of which 2.3% (∼5 million vehicles) entered the intersection during the yellow phase. In all non-urban signalized intersections in Israel the green phase ends with 3 s of flashing green, followed by 3 s of yellow. On most non-urban signalized roads in Israel the posted speed limit is 90 km/h. Our analysis focuses on crossings during the yellow phase and the first 1.5 s of the red phase. The analysis method consists of two stages. In the first stage we tested whether the frequency of crossings is constant at the beginning of the yellow phase. We found that the pattern was stable (i.e., the frequencies were constant) at 18 intersections, nearly stable at 13 intersections and unstable at 6 intersections. In addition to the 6 intersections with unstable patterns, two other outlying intersections were excluded from subsequent analysis. Logistic regression models were fitted for each of the remaining 29 intersections. We examined both standard (exponential) logistic regression and four-parameter logistic regression. The results show a clear advantage for the former. The estimated parameters show that the time when the frequency of crossing reduces to half ranges from 1.7 to 2.3 s after yellow onset. The duration of the reduction of the relative frequency from 0.9 to 0.1 ranged
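
As an illustration of the standard (exponential) logistic fit the authors favour, the sketch below fits a logistic curve to synthetic relative crossing frequencies and reads off the half-crossing time and the 0.9-to-0.1 drop duration; the data and parameter values are invented for the example and are not the study's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t_half, slope):
    """Relative crossing frequency as a function of entrance time after yellow onset."""
    return 1.0 / (1.0 + np.exp((t - t_half) / slope))

# Synthetic relative frequencies per 0.1 s bin (stand-in for one intersection's data)
t = np.arange(0.0, 4.5, 0.1)
rng = np.random.default_rng(1)
freq = logistic(t, 2.0, 0.25) + 0.02 * rng.normal(size=t.size)

(p_half, p_slope), _ = curve_fit(logistic, t, freq, p0=(2.0, 0.3))
print(f"time to half crossing frequency: {p_half:.2f} s after yellow onset")
# Duration of the drop from 0.9 to 0.1 of the relative frequency:
print(f"0.9-to-0.1 duration: {2 * p_slope * np.log(9):.2f} s")
```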

  10. Satellite camera image navigation

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Savides, John (Inventor); Hanson, Charles W. (Inventor)

    1987-01-01

    Pixels within a satellite camera (1, 2) image are precisely located in terms of latitude and longitude on a celestial body, such as the earth, being imaged. A computer (60) on the earth generates models (40, 50) of the satellite's orbit and attitude, respectively. The orbit model (40) is generated from measurements of stars and landmarks taken by the camera (1, 2), and by range data. The orbit model (40) is an expression of the satellite's latitude and longitude at the subsatellite point, and of the altitude of the satellite, as a function of time, using as coefficients (K) the six Keplerian elements at epoch. The attitude model (50) is based upon star measurements taken by each camera (1, 2). The attitude model (50) is a set of expressions for the deviations in a set of mutually orthogonal reference optical axes (x, y, z) as a function of time, for each camera (1, 2). Measured data is fit into the models (40, 50) using a walking least squares fit algorithm. A transformation computer (66 ) transforms pixel coordinates as telemetered by the camera (1, 2) into earth latitude and longitude coordinates, using the orbit and attitude models (40, 50).

  11. Imaging performance comparison between a LaBr{sub 3}:Ce scintillator based and a CdTe semiconductor based photon counting compact gamma camera

    SciTech Connect

    Russo, P.; Mettivier, G.; Pani, R.; Pellegrini, R.; Cinti, M. N.; Bennati, P.

    2009-04-15

    The authors report on the performance of two small field of view, compact gamma cameras working in single photon counting in planar imaging tests at 122 and 140 keV. The first camera is based on a LaBr{sub 3}:Ce scintillator continuous crystal (49x49x5 mm{sup 3}) assembled with a flat panel multianode photomultiplier tube with parallel readout. The second one belongs to the class of semiconductor hybrid pixel detectors, specifically, a CdTe pixel detector (14x14x1 mm{sup 3}) with 256x256 square pixels and a pitch of 55 {mu}m, read out by a CMOS single photon counting integrated circuit of the Medipix2 series. The scintillation camera was operated with selectable energy window while the CdTe camera was operated with a single low-energy detection threshold of about 20 keV, i.e., without energy discrimination. The detectors were coupled to pinhole or parallel-hole high-resolution collimators. The evaluation of their overall performance in basic imaging tasks is presented through measurements of their detection efficiency, intrinsic spatial resolution, noise, image SNR, and contrast recovery. The scintillation and CdTe cameras showed, respectively, detection efficiencies at 122 keV of 83% and 45%, intrinsic spatial resolutions of 0.9 mm and 75 {mu}m, and total background noises of 40.5 and 1.6 cps. Imaging tests with high-resolution parallel-hole and pinhole collimators are also reported.

  12. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses

    NASA Astrophysics Data System (ADS)

    Fink, Wolfgang; You, Cindy X.; Tarbell, Mark A.

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (μAVS2) for real-time image processing. Truly standalone, μAVS2 is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on μAVS2 operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. μAVS2 imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, μAVS2 affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, μAVS2 can easily be reconfigured for other prosthetic systems. Testing of μAVS2 with actual retinal implant carriers is envisioned in the near future.
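
The user-defined linear sequential-loop filter chain can be pictured as an ordered list of frame-to-frame functions applied in turn, with repeats allowed. The sketch below is a generic illustration using OpenCV filters, not the μAVS2 code itself; the filter choices and pipeline order are assumptions.

```python
import cv2

# Illustrative filters; each takes and returns a grayscale frame.
def blur(frame):    return cv2.GaussianBlur(frame, (5, 5), 0)
def edges(frame):   return cv2.Canny(frame, 50, 150)
def invert(frame):  return cv2.bitwise_not(frame)

def process(frame, pipeline):
    """Apply a user-defined sequence of filters to one frame, in order."""
    for f in pipeline:
        frame = f(frame)
    return frame

# A user-defined order in which filters may repeat, as described above.
pipeline = [blur, edges, invert, blur]

cap = cv2.VideoCapture(0)          # USB camera
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    out = process(gray, pipeline)  # `out` would be sent to the prosthesis link (not shown)
cap.release()
```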

  13. Microcomputer-based artificial vision support system for real-time image processing for camera-driven visual prostheses.

    PubMed

    Fink, Wolfgang; You, Cindy X; Tarbell, Mark A

    2010-01-01

    It is difficult to predict exactly what blind subjects with camera-driven visual prostheses (e.g., retinal implants) can perceive. Thus, it is prudent to offer them a wide variety of image processing filters and the capability to engage these filters repeatedly in any user-defined order to enhance their visual perception. To attain true portability, we employ a commercial off-the-shelf battery-powered general purpose Linux microprocessor platform to create the microcomputer-based artificial vision support system (microAVS(2)) for real-time image processing. Truly standalone, microAVS(2) is smaller than a deck of playing cards, lightweight, fast, and equipped with USB, RS-232 and Ethernet interfaces. Image processing filters on microAVS(2) operate in a user-defined linear sequential-loop fashion, resulting in vastly reduced memory and CPU requirements during execution. MicroAVS(2) imports raw video frames from a USB or IP camera, performs image processing, and issues the processed data over an outbound Internet TCP/IP or RS-232 connection to the visual prosthesis system. Hence, microAVS(2) affords users of current and future visual prostheses independent mobility and the capability to customize the visual perception generated. Additionally, microAVS(2) can easily be reconfigured for other prosthetic systems. Testing of microAVS(2) with actual retinal implant carriers is envisioned in the near future.

  14. Positron sources for Linear Colliders

    SciTech Connect

    Gai Wei; Liu Wanming

    2009-09-02

    Positron beams have many applications and there are many different concepts for positron sources. In this paper, only positron source techniques for linear colliders are covered. In order to achieve high luminosity, a linear collider positron source should have a high beam current, high beam energy, small emittance and, for some applications, a high degree of beam polarization. There are several different schemes presently being developed around the globe. Both the differences between these schemes and their common technical challenges are discussed.

  15. A smart camera based traffic enforcement system: experiences from the field

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Loibner, Gernot

    2013-03-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Embedded vision systems can count vehicles and estimate the state of traffic along the road; they can supplement or replace loop sensors, with their limited local scope, and radar, which measures the speed, presence and number of vehicles. This work presents a vision system which has been built to detect and report traffic rule violations at unsecured railway crossings, which pose a threat to drivers day and night. Our system is designed to detect and record vehicles passing over the railway crossing after the red light has been activated. Sparse optical flow in conjunction with motion clustering is used for real-time motion detection in order to capture these safety-critical events. The cameras are activated by an electrical signal from the railway when the red light turns on. If they detect a vehicle moving over the stopping line, and it is well over this limit, an image sequence will be recorded and stored onboard for later evaluation. The system has been designed to be operational in all weather conditions, delivering human-readable license plate images even under the worst illumination conditions, such as direct incident sunlight or a direct view into vehicle headlights. After several months of operation in the field we can report on the performance of the system, its hardware implementation, as well as the implementation of algorithms which have proven to be usable in this real-world application.
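
A rough sketch of the detection step described (sparse optical flow plus a simple motion-coherence test against the stop line), using OpenCV's pyramidal Lucas-Kanade tracker; the thresholds and the crude cluster-size rule are assumptions, not the deployed system's logic.

```python
import cv2
import numpy as np

def moving_points(prev_gray, gray, min_motion_px=2.0):
    """Return start/end positions of sparse features that moved between two frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300, qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    ok = status.ravel() == 1
    p0, p1 = pts[ok].reshape(-1, 2), nxt[ok].reshape(-1, 2)
    motion = np.linalg.norm(p1 - p0, axis=1)
    keep = motion > min_motion_px
    return p0[keep], p1[keep]

def violation(prev_gray, gray, stop_line_y, red_active):
    """Trigger when enough coherent motion crosses the stop line while the red is on."""
    if not red_active:
        return False
    p0, p1 = moving_points(prev_gray, gray)
    crossed = (p0[:, 1] > stop_line_y) & (p1[:, 1] <= stop_line_y)
    return crossed.sum() >= 10   # crude stand-in for motion clustering
```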

  16. Unscented Kalman filtering for single camera based motion and shape estimation.

    PubMed

    Jwo, Dah-Jing; Tseng, Chien-Hao; Liu, Jen-Chu; Lee, Hsien-Der

    2011-01-01

    Accurate estimation of the motion and shape of a moving object is a challenging task due to the great variety of noise present from sources such as electronic components and the influence of the external environment. To alleviate this noise, a filtering/estimation approach can be used to reduce it in streaming video and obtain better estimation accuracy for feature points on the moving objects. To deal with the filtering problem in the appropriate nonlinear system, the extended Kalman filter (EKF), which neglects higher-order derivatives in the linearization process, has been very popular. The unscented Kalman filter (UKF), which uses a deterministic sampling approach to capture the mean and covariance estimates with a minimal set of sample points, is able to achieve at least second-order accuracy without computing Jacobians. In this paper, the UKF is applied to the rigid body motion and shape dynamics to estimate feature points on moving objects. The performance evaluation is carried out through a numerical study. The results show that the UKF yields a substantial improvement in the accuracy of estimating the motion and planar surface parameters from a single camera.
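
As a minimal illustration of the unscented machinery (not the paper's full rigid-body motion and shape model), the sketch below runs a UKF on a toy constant-velocity feature-point track, assuming the filterpy package is available; the state model, noise levels, and measurements are invented for the example.

```python
import numpy as np
from filterpy.kalman import UnscentedKalmanFilter, MerweScaledSigmaPoints

dt = 1.0 / 30.0   # video frame interval

def fx(x, dt):
    """Constant-velocity motion of one feature point: state [u, v, du, dv]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ x

def hx(x):
    """Only the (noisy) pixel position of the feature is observed."""
    return x[:2]

points = MerweScaledSigmaPoints(n=4, alpha=0.1, beta=2.0, kappa=0.0)
ukf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
ukf.x = np.array([100.0, 80.0, 0.0, 0.0])
ukf.P *= 10.0
ukf.R *= 1.0      # pixel measurement noise
ukf.Q *= 0.01     # process noise

for z in [(101.2, 80.5), (102.3, 81.1), (103.1, 81.8)]:   # tracked pixel positions
    ukf.predict()
    ukf.update(np.array(z))
print(ukf.x)      # filtered position and velocity estimate
```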

  17. A global station coordinate solution based upon camera and laser data - GSFC 1973

    NASA Technical Reports Server (NTRS)

    Marsh, J. G.; Douglas, B. C.; Klosko, S. M.

    1973-01-01

    Results for the geocentric coordinates of 72 globally distributed satellite tracking stations consisting of 58 cameras and 14 lasers are presented. The observational data for this solution consists of over 65,000 optical observations and more than 350 laser passes recorded during the National Geodetic Satellite Program, the 1968 Centre National d'Etudes Spatiales/Smithsonian Astrophysical Observatory (SAO) Program, and International Satellite Geodesy Experiment Program. Dynamic methods were used. The data were analyzed with the GSFC GEM and SAO 1969 Standard Earth Gravity Models. The recent value of GM = 398600.8 km³/s² derived at the Jet Propulsion Laboratory (JPL) gave the best results for this combination laser/optical solution. Solutions are made with the deep space solution of JPL (LS-25 solution) including results obtained at GSFC from Mariner-9 Unified B-Band tracking. Datum transformation parameters relating North America, Europe, South America, and Australia are given, enabling the positions of some 200 other tracking stations to be placed in the geocentric system.

  18. A Trajectory and Orientation Reconstruction Method for Moving Objects Based on a Moving Monocular Camera

    PubMed Central

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-01-01

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to a lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional “point intersection” to “trajectory intersection” in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of an object with poor conditions can also be calculated. The required condition for the existence of a definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable. PMID:25760053

  19. Development of X-ray CCD camera based X-ray micro-CT system

    NASA Astrophysics Data System (ADS)

    Sarkar, Partha S.; Ray, N. K.; Pal, Manoj K.; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y.; Sinha, A.; Gadkari, S. C.

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using an X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously on and, owing to the readout mechanism of the CCD detector electronics, the detector also registers photons reaching it during the read-out period. This introduces a shadow-like pattern in the image, known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.

  20. Development of X-ray CCD camera based X-ray micro-CT system.

    PubMed

    Sarkar, Partha S; Ray, N K; Pal, Manoj K; Baribaddala, Ravi; Agrawal, Ashish; Kashyap, Y; Sinha, A; Gadkari, S C

    2017-02-01

    Availability of microfocus X-ray sources and high resolution X-ray area detectors has made it possible for high resolution microtomography studies to be performed outside the purview of synchrotron. In this paper, we present the work towards the use of an external shutter on a high resolution microtomography system using an X-ray CCD camera as a detector. During micro computed tomography experiments, the X-ray source is continuously on and, owing to the readout mechanism of the CCD detector electronics, the detector also registers photons reaching it during the read-out period. This introduces a shadow-like pattern in the image, known as smear, whose direction is defined by the vertical shift register. To resolve this issue, the developed system has been fitted with a synchronized shutter just in front of the X-ray source. This is positioned in the X-ray beam path during the image readout period and out of the beam path during the image acquisition period. This technique has resulted in improved data quality, which is reflected in the reconstructed images.

  1. Single-camera sequential-scan-based polarization-sensitive SDOCT for retinal imaging.

    PubMed

    Zhao, Mingtao; Izatt, Joseph A

    2009-01-15

    A single-camera, high-speed, polarization-sensitive, spectral-domain optical-coherence-tomography system was developed to measure the polarization properties of the in vivo human retina. A novel phase-unwrapping method in birefringent media is described to extract the total reflectivity, accumulative retardance, and fast-axis orientation from a specially designed sequence of polarization states incident on the sample. A quarter-wave plate was employed to test the performance of the system. The average error and standard deviation of the retardation measurements were 3.2 degrees and 2.3 degrees, respectively, and of the fast-axis orientation 1.2 degrees and 0.7 degrees, over the range of 0 degrees-180 degrees. The depolarization properties of the retinal pigment epithelium were clearly observed in both the retardance and fast-axis orientation images. A normalized standard deviation of the retardance and of the fast-axis orientation is introduced to segment the polarization-scrambling layer of the retinal pigment epithelium.

  2. A trajectory and orientation reconstruction method for moving objects based on a moving monocular camera.

    PubMed

    Zhou, Jian; Shang, Yang; Zhang, Xiaohu; Yu, Wenxian

    2015-03-09

    We propose a monocular trajectory intersection method to solve the problem that a monocular moving camera cannot be used for three-dimensional reconstruction of a moving object point. The necessary and sufficient condition under which this method has a unique solution is provided. An extended application of the method is to not only achieve the reconstruction of the 3D trajectory, but also to capture the orientation of the moving object, which would not be obtained by PnP problem methods due to a lack of features. It is a breakthrough improvement that develops the intersection measurement from the traditional "point intersection" to "trajectory intersection" in videometrics. The trajectory of the object point can be obtained by using only linear equations without any initial value or iteration; the orientation of an object with poor conditions can also be calculated. The required condition for the existence of a definite solution of this method is derived from equivalence relations of the orders of the moving trajectory equations of the object, which specifies the applicable conditions of the method. Simulation and experimental results show that it not only applies to objects moving along a straight line, a conic, or another simple trajectory, but also provides good results for more complicated trajectories, making it widely applicable.

  3. Research on detecting heterogeneous fibre from cotton based on linear CCD camera

    NASA Astrophysics Data System (ADS)

    Zhang, Xian-bin; Cao, Bing; Zhang, Xin-peng; Shi, Wei

    2009-07-01

    Heterogeneous fibres in cotton have a great impact on the production of cotton textiles: they degrade product quality and thereby affect the economic benefits and market competitiveness of the corporation. The detection and elimination of heterogeneous fibres is therefore particularly important for improving cotton processing, advancing the quality of cotton textiles, and reducing production costs, and the technology has favorable market value and development prospects. An optical detecting system has obtained widespread application. In this system, we use a linear CCD camera to scan the running cotton; the video signals are then fed into a computer and processed according to differences in grayscale. If heterogeneous fibre is present in the cotton, the computer sends a command to drive the gas nozzle to eliminate it. In this paper, we adopt a monochrome LED array as the new detecting light source; its flicker, luminous-intensity stability, lumen depreciation, and useful life are all superior to fluorescent light. We first analyse the reflection spectra of cotton and various heterogeneous fibres, then select an appropriate frequency for the light source, finally adopting a violet LED array as the new detecting light source. The whole hardware structure and software design are introduced in this paper.

  4. Imaging in laser spectroscopy by a single-pixel camera based on speckle patterns

    NASA Astrophysics Data System (ADS)

    Žídek, K.; Václavík, J.

    2016-11-01

    Compressed sensing (CS) is a branch of computational optics able to reconstruct an image (or any other information) from a reduced number of measurements, thus significantly saving measurement time. It relies on encoding the detected information by a random pattern and consequent mathematical reconstruction. CS can be the enabling step to carry out imaging in many time-consuming measurements. The critical step in CS experiments is the method used to invoke encoding by a random mask. Complex devices and relay optics are commonly used for the purpose. We present a new approach to creating the random mask by using laser speckles from coherent laser light passing through a diffusor. This concept is especially powerful in laser spectroscopy, where it does not require any complicated modification of the current techniques. The main advantage consists in the unmatched simplicity of the random pattern generation and the versatility of the pattern resolution. Unlike in the case of commonly used random masks, here the pattern fineness can be adjusted by changing the laser spot size being diffused. We demonstrate the pattern tuning together with the associated changes in the pattern statistics. In particular, the issue of pattern orthogonality, which is important for CS applications, is discussed. Finally, we demonstrate on a set of 200 acquired speckle patterns that the concept can be successfully employed for single-pixel camera imaging. We discuss requirements on detector noise for the image reconstruction.
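
A sketch of the reconstruction stage under stated assumptions: the acquired speckle patterns act as rows of the sensing matrix, each paired with one single-pixel (bucket) reading, and a sparsity-promoting solver (here scikit-learn's Lasso, on a synthetic scene) recovers the image. The paper's actual solver and data are not reproduced.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_px, n_meas = 32 * 32, 200              # image pixels, number of speckle patterns

# Stand-ins for the measured speckle patterns (rows of the sensing matrix)
patterns = rng.random((n_meas, n_px))

# Toy scene that is sparse in the pixel basis
scene = np.zeros(n_px)
scene[rng.choice(n_px, size=20, replace=False)] = 1.0

# Single-pixel detector readings, one per pattern, with a little noise
measurements = patterns @ scene + 0.01 * rng.normal(size=n_meas)

# Sparsity-promoting reconstruction
solver = Lasso(alpha=0.01, max_iter=10000, positive=True)
solver.fit(patterns, measurements)
recovered = solver.coef_.reshape(32, 32)
print(np.count_nonzero(recovered > 0.5))   # roughly the 20 bright pixels
```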

  5. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras.

    PubMed

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-08-30

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme.
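
The estimator is described only as least-squares based; the sketch below shows a generic recursive least-squares update for a linear-in-parameters model, which is the usual form for manipulator parameter identification. The class name and the toy regressor data stand in for the paper's gyro/camera-derived quantities.

```python
import numpy as np

class RecursiveLeastSquares:
    """Recursive least-squares estimator for a linear-in-parameters model y = phi @ theta."""

    def __init__(self, n_params, forgetting=1.0, p0=1e3):
        self.theta = np.zeros(n_params)
        self.P = np.eye(n_params) * p0
        self.lam = forgetting

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        denom = self.lam + phi @ self.P @ phi
        k = self.P @ phi / denom                     # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Toy usage: identify two parameters from noisy scalar measurements
rls = RecursiveLeastSquares(n_params=2)
true_theta = np.array([2.0, -0.5])
rng = np.random.default_rng(3)
for _ in range(200):
    phi = rng.normal(size=2)                         # regressor (stand-in for sensor-derived terms)
    y = phi @ true_theta + 0.01 * rng.normal()
    rls.update(phi, y)
print(rls.theta)                                     # close to [2.0, -0.5]
```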

  6. Target Capturing Control for Space Robots with Unknown Mass Properties: A Self-Tuning Method Based on Gyros and Cameras

    PubMed Central

    Li, Zhenyu; Wang, Bin; Liu, Hong

    2016-01-01

    Satellite capturing with free-floating space robots is still a challenging task due to the non-fixed base and unknown mass property issues. In this paper gyro and eye-in-hand camera data are adopted as an alternative choice for solving this problem. For this improved system, a new modeling approach that reduces the complexity of system control and identification is proposed. With the newly developed model, the space robot is equivalent to a ground-fixed manipulator system. Accordingly, a self-tuning control scheme is applied to handle such a control problem including unknown parameters. To determine the controller parameters, an estimator is designed based on the least-squares technique for identifying the unknown mass properties in real time. The proposed method is tested with a credible 3-dimensional ground verification experimental system, and the experimental results confirm the effectiveness of the proposed control scheme. PMID:27589748

  7. Digital image measurement of specimen deformation based on CCD cameras and Image J software: an application to human pelvic biomechanics

    NASA Astrophysics Data System (ADS)

    Jia, Yongwei; Cheng, Liming; Yu, Guangrong; Lou, Yongjian; Yu, Yan; Chen, Bo; Ding, Zuquan

    2008-03-01

    A method of digital image measurement of specimen deformation based on CCD cameras and Image J software was developed. This method was used to measure the biomechanical behavior of the human pelvis. Six cadaveric specimens from the third lumbar vertebra to the proximal 1/3 of the femur were tested. The specimens, without any structural abnormalities, were dissected of all soft tissue, sparing the hip joint capsules and the ligaments of the pelvic ring and floor. Markers with a black dot on a white background were affixed to the key regions of the pelvis. Axial loading from the proximal lumbar spine was applied by an MTS machine in increments from 0 N to 500 N, simulating a double-leg standing stance. Anterior and lateral images of the specimen were obtained through two CCD cameras. Digital 8-bit images were processed with Image J, a digital image processing program that can be freely downloaded from the National Institutes of Health. The procedure includes recognition of the digital marker, image inversion, sub-pixel reconstruction, image segmentation, and a center-of-mass algorithm based on the weighted average of pixel gray values. Vertical displacements of S1 (the first sacral vertebra) in the frontal view and the micro-angular rotation of the sacroiliac joint in the lateral view were calculated from the marker movement. The results of the digital image measurement were as follows: marker image correlation before and after deformation was excellent, with an average correlation coefficient of about 0.983. For the 768 × 576 pixel images (pixel size 0.68 mm × 0.68 mm), the precision of the displacement detected in our experiment was about 0.018 pixels and the relative error was about 1.11‰. The average vertical displacement of S1 of the pelvis was 0.8356 ± 0.2830 mm under a vertical load of 500 N, and the average micro-angular rotation of the sacroiliac joint in the lateral view was 0.584 ± 0.221°. The load-displacement curves obtained from our optical measure system
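
A minimal sketch of the centre-of-mass step described (sub-pixel marker position as the gray-value-weighted average of pixel coordinates), assuming an inverted image and a segmented marker mask are already available; the 0.68 mm/pixel conversion in the comment is the value quoted in the abstract.

```python
import numpy as np

def marker_centroid(gray, mask):
    """Sub-pixel marker position as the gray-value-weighted centre of mass.

    `gray` is the (inverted) 8-bit image so that the marker dot is bright;
    `mask` is a boolean array selecting the segmented marker region.
    """
    ys, xs = np.nonzero(mask)
    w = gray[ys, xs].astype(float)
    return np.array([np.sum(w * xs) / w.sum(),   # x (column), sub-pixel
                     np.sum(w * ys) / w.sum()])  # y (row), sub-pixel

# Displacement between two load steps, converted to millimetres (0.68 mm per pixel)
# disp_mm = (marker_centroid(img2, m2) - marker_centroid(img1, m1)) * 0.68
```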

  8. On the localization of positrons in metal vacancies

    NASA Astrophysics Data System (ADS)

    Babich, A. V.; Pogosov, V. V.; Reva, V. I.

    2015-11-01

    The probability of localization of positrons in single vacancies of Al, Cu, and Zn as a function of temperature has been calculated. The vacancy has been simulated by a cavity with the radius of the Wigner-Seitz cell in the stabilized jellium model. A formula for the rate of trapping of a positron by a vacancy as a function of the positron energy has been obtained using the "golden" rule for transitions, under the assumption that the positron energy is spent on the excitation of electron-hole pairs. The temperature dependence of the localization rate has been calculated for thermalized positrons. It has been found that, in the vicinity of the triple point, the localization rate is close in order of magnitude to the annihilation rate. Based on the results reported in our previous publications devoted to evaluating the influence of vacancies on the work function of free positrons, it has been assumed that, near the surface of the metal, there are vacancies charged by positrons. In the approximation of a two-dimensional superlattice, the near-surface vacancy barrier has been estimated. The experimentally revealed shift of the energy distribution of re-emitted positrons has been assumed to be caused by the reflection of low-energy positrons from the vacancy barrier back into the bulk of the metal, where they annihilate.

  9. Recent Developments in the Design of the NLC Positron Source

    SciTech Connect

    Kotseroglou, T.; Bharadwaj, V.; Clendenin, J.E.; Ecklund, S.; Frisch, J.; Krejcik, P.; Kukikov, A.V.; Liu, J.; Maruyama, T.; Millage, K.K.; Mulhollan, G.; Nelson, W.R.; Schultz, D.C.; Sheppard, J.C.; Turner, J.; Van Bibber, K.; Flottmann, K.; Namito, Y.

    1999-11-05

    Recent developments in the design of the Next Linear Collider (NLC) positron source based on updated beam parameters are described. The unpolarized NLC positron source [1,2] consists of a dedicated 6.2 GeV S-band electron accelerator, a high-Z positron production target, a capture system, and an L-band positron linac. The 1998 failure of the SLC target, which is currently under investigation, may lead to a variation of the target design. Progress towards a polarized positron source is also presented. A moderately polarized positron beam colliding with a highly polarized electron beam results in an effective polarization large enough to explore new physics at the NLC. One of the schemes for a polarized positron source incorporates a polarized electron source, a 50 MeV electron accelerator, a thin target for positron production, and a new capture system optimized for high-energy, small angular-divergence positrons. The yield for such a process, checked using the EGS4 code, is of the order of 10⁻³. The EGS4 code has been enhanced to include the effect of polarization in the bremsstrahlung and pair-production processes.

  10. Moisture determination in composite materials using positron lifetime techniques

    NASA Technical Reports Server (NTRS)

    Singh, J. J.; Holt, W. R.; Mock, W., Jr.

    1980-01-01

    A technique was developed which has the potential of providing information on the moisture content as well as its depth in the specimen. This technique was based on the dependence of positron lifetime on the moisture content of the composite specimen. The positron lifetime technique of moisture determination and the results of the initial studies are described.

  11. Positron annihilation processes update

    NASA Technical Reports Server (NTRS)

    Guessoum, Nidhal; Skibo, Jeffrey G.; Ramaty, Reuven

    1997-01-01

    The present knowledge concerning positron annihilation processes is reviewed, with emphasis on the cross-section data for the various processes of interest in astrophysical applications. Recent results on reaction rates and line widths are presented, and their validity is verified.

  12. Positron excitation of neon

    NASA Technical Reports Server (NTRS)

    Parcell, L. A.; Mceachran, R. P.; Stauffer, A. D.

    1990-01-01

    The differential and total cross sections for the excitation of the 3s ¹P°₁ and 3p ¹P₁ states of neon by positron impact were calculated using a distorted-wave approximation. The results agree well with experimental conclusions.

  13. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    PubMed

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task, where a dense, ordered imaging network of 45 high-resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network, and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve a pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured using a total station for external validation and scaling purposes. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  14. Motion Tracker: Camera-Based Monitoring of Bodily Movements Using Motion Silhouettes

    PubMed Central

    Westlund, Jacqueline Kory; D’Mello, Sidney K.; Olney, Andrew M.

    2015-01-01

    Researchers in the cognitive and affective sciences investigate how thoughts and feelings are reflected in the bodily response systems including peripheral physiology, facial features, and body movements. One specific question along this line of research is how cognition and affect are manifested in the dynamics of general body movements. Progress in this area can be accelerated by inexpensive, non-intrusive, portable, scalable, and easy to calibrate movement tracking systems. Towards this end, this paper presents and validates Motion Tracker, a simple yet effective software program that uses established computer vision techniques to estimate the amount a person moves from a video of the person engaged in a task (available for download from http://jakory.com/motion-tracker/). The system works with any commercially available camera and with existing videos, thereby affording inexpensive, non-intrusive, and potentially portable and scalable estimation of body movement. Strong between-subject correlations were obtained between Motion Tracker’s estimates of movement and body movements recorded from the seat (r = .720) and back (r = .695 for participants with higher back movement) of a chair affixed with pressure-sensors while completing a 32-minute computerized task (Study 1). Within-subject cross-correlations were also strong for both the seat (r = .606) and back (r = .507). In Study 2, between-subject correlations between Motion Tracker’s movement estimates and movements recorded from an accelerometer worn on the wrist were also strong (rs = .801, .679, and .681) while people performed three brief actions (e.g., waving). Finally, in Study 3 the within-subject cross-correlation was high (r = .855) when Motion Tracker’s estimates were correlated with the movement of a person’s head as tracked with a Kinect while the person was seated at a desk. Best-practice recommendations, limitations, and planned extensions of the system are discussed. PMID:26086771
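
    As a rough illustration of camera-based movement estimation from motion silhouettes, the sketch below differences consecutive grayscale frames and reports the fraction of changed pixels per frame. It is not the published Motion Tracker implementation; the use of OpenCV, the blur kernel, the threshold value, and the file name are assumptions.

      # Frame-differencing sketch of a motion-silhouette movement estimate.
      # This is not the published Motion Tracker code; the use of OpenCV, the
      # blur kernel and the threshold value are illustrative assumptions.
      import cv2

      def movement_per_frame(video_path, thresh=25):
          """Yield, per frame, the fraction of pixels that changed since the
          previous frame (a crude proxy for the amount of body movement)."""
          cap = cv2.VideoCapture(video_path)
          prev = None
          while True:
              ok, frame = cap.read()
              if not ok:
                  break
              gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
              gray = cv2.GaussianBlur(gray, (5, 5), 0)     # suppress sensor noise
              if prev is not None:
                  silhouette = cv2.absdiff(gray, prev) > thresh
                  yield silhouette.mean()                  # fraction of moving pixels
              prev = gray
          cap.release()

      # movement = list(movement_per_frame("session.avi"))  # hypothetical file name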

  15. High current pulsed positron microprobe

    SciTech Connect

    Howell, R.H.; Stoeffl, W.; Kumar, A.; Sterne, P.A.; Cowan, T.E.; Hartley, J.

    1997-05-01

    We are developing a low energy, microscopically focused, pulsed positron beam for defect analysis by positron lifetime spectroscopy to provide a new defect analysis capability at the 10¹⁰ e⁺ s⁻¹ beam at the Lawrence Livermore National Laboratory electron linac. When completed, the pulsed positron microprobe will enable defect specific, 3-dimensional maps of defect concentrations with sub-micron resolution of defect location. By coupling these data with first principles calculations of defect specific positron lifetimes and positron implantation profiles we will both map the identity and concentration of defect distributions.

  16. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras

    PubMed Central

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-01-01

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained. PMID:27869731

  17. Detailed measurements and shaping of gate profiles for microchannel-plate-based X-ray framing cameras

    SciTech Connect

    Landen, O.L.; Hammel, B.A.; Bell, P.M.; Abare, A. |; Bradley, D.K. |

    1994-10-03

    Gated, microchannel-plate-based (MCP) framing cameras are increasingly used worldwide for x-ray imaging of subnanosecond laser-plasma phenomena. Large dynamic range (> 1,000) measurements of gain profiles for gated microchannel plates (MCP) are presented. Temporal profiles are reconstructed for any point on the microstrip transmission line from data acquired over many shots with variable delay. No evidence for significant pulse distortion by voltage reflections at the ends of the microstrip is observed. The measured profiles compare well to predictions by a time-dependent discrete dynode model down to the 1% level. The calculations do overestimate the contrast further into the temporal wings. The role of electron transit time dispersion in limiting the minimum achievable gate duration is then investigated by using variable duration flattop gating pulses. A minimum gate duration of 50 ps is achieved with flattop gating, consistent with a fractional transit time spread of ≈15%.

  18. Sensor for In-Motion Continuous 3D Shape Measurement Based on Dual Line-Scan Cameras.

    PubMed

    Sun, Bo; Zhu, Jigui; Yang, Linghui; Yang, Shourui; Guo, Yin

    2016-11-18

    The acquisition of three-dimensional surface data plays an increasingly important role in the industrial sector. Numerous 3D shape measurement techniques have been developed. However, there are still limitations and challenges in fast measurement of large-scale objects or high-speed moving objects. The innovative line scan technology opens up new potentialities owing to the ultra-high resolution and line rate. To this end, a sensor for in-motion continuous 3D shape measurement based on dual line-scan cameras is presented. In this paper, the principle and structure of the sensor are investigated. The image matching strategy is addressed and the matching error is analyzed. The sensor has been verified by experiments and high-quality results are obtained.

  19. A method for evaluating image quality of monochrome and color displays based on luminance by use of a commercially available color digital camera

    SciTech Connect

    Tokurei, Shogo E-mail: junjim@med.kyushu-u.ac.jp; Morishita, Junji E-mail: junjim@med.kyushu-u.ac.jp

    2015-08-15

    Purpose: The aim of this study is to propose a method for the quantitative evaluation of image quality of both monochrome and color liquid-crystal displays (LCDs) using a commercially available color digital camera. Methods: The intensities of the unprocessed red (R), green (G), and blue (B) signals of a camera vary depending on the spectral sensitivity of the image sensor used in the camera. For consistent evaluation of image quality for both monochrome and color LCDs, the unprocessed RGB signals of the camera were converted into gray scale signals that corresponded to the luminance of the LCD. Gray scale signals for the monochrome LCD were evaluated by using only the green channel signals of the camera. For the color LCD, the RGB signals of the camera were converted into gray scale signals by employing weighting factors (WFs) for each RGB channel. A line image displayed on the color LCD was simulated on the monochrome LCD by using a software application for subpixel driving in order to verify the WF-based conversion method. Furthermore, the results obtained by different types of commercially available color cameras and a photometric camera were compared to examine the consistency of the authors’ method. Finally, image quality for both the monochrome and color LCDs was assessed by measuring modulation transfer functions (MTFs) and Wiener spectra (WS). Results: The authors’ results demonstrated that the proposed method for calibrating the spectral sensitivity of the camera resulted in a consistent and reliable evaluation of the luminance of monochrome and color LCDs. The MTFs and WS showed different characteristics for the two LCD types owing to difference in the subpixel structure. The MTF in the vertical direction of the color LCD was superior to that of the monochrome LCD, although the WS in the vertical direction of the color LCD was inferior to that of the monochrome LCD as a result of luminance fluctuations in RGB subpixels. Conclusions: The authors
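
    The weighting-factor (WF) conversion described in the Methods can be illustrated with a short routine that collapses the unprocessed RGB camera signals into one luminance-like gray-scale image. This is a sketch only: the weights shown are placeholders, whereas in the study they would be derived from calibrating the specific camera against measured LCD luminance.

      # Sketch of the weighting-factor (WF) conversion: unprocessed camera RGB
      # signals are combined into one gray-scale signal that tracks display
      # luminance. The weights below are placeholders; in the study they would
      # come from calibrating the specific camera against measured luminance.
      import numpy as np

      def rgb_to_gray(raw_rgb, weights=(0.2, 0.7, 0.1)):
          """raw_rgb: (H, W, 3) array of unprocessed R, G, B camera signals.
          Returns an (H, W) gray-scale image proportional to luminance."""
          w = np.asarray(weights, dtype=float)
          w = w / w.sum()                      # normalize the weighting factors
          return np.tensordot(raw_rgb.astype(float), w, axes=([2], [0]))

      def gray_from_green(raw_rgb):
          """For the monochrome LCD only the green channel is used."""
          return raw_rgb[..., 1].astype(float)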

  20. Defects in metals. [Positron annihilation spectroscopy

    SciTech Connect

    Siegel, R.W.

    1982-06-01

    The application of positron annihilation spectroscopy (PAS) to the study of defects in metals has led to increased knowledge on lattice-defect properties during the past decade in two areas: the determination of atomic defect properties, particularly those of monovacancies, and the monitoring and characterization of vacancy-like microstructure development during post-irradiation and post-quench annealing. The study of defects in metals by PAS is reviewed within the context of the other available techniques for defect studies. The strengths and weaknesses of PAS as a method for the characterization of defect microstructures are considered. The additional possibilities for using the positron as a localized probe of the atomic and electronic structures of atomic defects are discussed, based upon theoretical calculations of the annihilation characteristics of defect-trapped positrons and experimental observations. Finally, the present status and future potential of PAS as a tool for the study of defects in metals is considered. 71 references, 9 figures.

  1. Modelisation de photodetecteurs a base de matrices de diodes avalanche monophotoniques pour tomographie d'emission par positrons

    NASA Astrophysics Data System (ADS)

    Corbeil Therrien, Audrey

    Positron emission tomography (PET) is a valuable tool in preclinical research and for medical diagnosis. This technique provides a quantitative image of specific metabolic functions through the detection of annihilation photons. The detection of these photons relies on two components. First, a scintillator converts the energy of the 511 keV photon into photons in the visible spectrum. Then, a photodetector converts the light energy into an electrical signal. Recently, single-photon avalanche diodes (SPADs) arranged in arrays have attracted much interest for PET. These arrays form sensitive, robust, and compact detectors with outstanding timing resolution. These qualities make them a promising photodetector for PET, but the parameters of the array and of the readout electronics must be optimized to reach the best performance for PET. Optimizing the array quickly becomes a difficult task, because the various parameters interact in complex ways with the avalanche and noise generation processes. Moreover, the readout electronics for SPAD arrays are still rudimentary, and it would be profitable to analyze different readout strategies. To address this question, the most economical solution is to use a simulator to converge towards the configuration giving the best performance. The work in this thesis presents the development of such a simulator. It models the behavior of a SPAD array based on semiconductor physics equations and probabilistic models. It includes the three main noise sources, namely thermal noise, correlated spurious triggering (afterpulsing), and optical crosstalk. The simulator also makes it possible to test and compare new readout electronics approaches better suited to this type of detector. Ultimately, the simulator aims to
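
    A toy Monte Carlo in the spirit of the simulator described above is sketched below: it draws thermal (dark) counts and adds correlated afterpulsing and optical crosstalk to the primary avalanches of a SPAD array over one observation window. All rates, probabilities, and array parameters are illustrative assumptions, not values from the thesis.

      # Toy Monte Carlo of one SPAD-array observation window, covering the three
      # noise sources named above: thermal (dark) counts, correlated afterpulsing
      # and optical crosstalk (first order, no cascades). All rates, probabilities
      # and array parameters are illustrative assumptions, not thesis values.
      import numpy as np

      rng = np.random.default_rng(0)

      N_CELLS = 1000        # number of SPAD cells in the array
      T_WINDOW = 100e-9     # observation window in seconds
      DARK_RATE = 100e3     # dark-count rate per cell, in Hz
      P_AFTERPULSE = 0.02   # probability an avalanche retriggers its own cell
      P_CROSSTALK = 0.01    # probability an avalanche fires one neighbour

      def simulate_window(n_photons):
          """Return the total number of avalanches in one observation window."""
          dark = rng.poisson(DARK_RATE * T_WINDOW * N_CELLS)     # thermal noise
          primaries = n_photons + dark
          secondaries = (rng.binomial(primaries, P_AFTERPULSE)   # afterpulsing
                         + rng.binomial(primaries, P_CROSSTALK)) # optical crosstalk
          return primaries + secondaries

      counts = [simulate_window(n_photons=500) for _ in range(10000)]
      print("mean avalanches per window:", np.mean(counts))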

  2. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  3. Mars Observer camera

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Danielson, G. E.; Ingersoll, A. P.; Masursky, H.; Veverka, J.; Ravine, M. A.; Soulanille, T. A.

    1992-01-01

    The Mars Observer camera (MOC) is a three-component system (one narrow-angle and two wide-angle cameras) designed to take high spatial resolution pictures of the surface of Mars and to obtain lower spatial resolution, synoptic coverage of the planet's surface and atmosphere. The cameras are based on the 'push broom' technique; that is, they do not take 'frames' but rather build pictures, one line at a time, as the spacecraft moves around the planet in its orbit. MOC is primarily a telescope for taking extremely high resolution pictures of selected locations on Mars. Using the narrow-angle camera, areas ranging from 2.8 km x 2.8 km to 2.8 km x 25.2 km (depending on available internal digital buffer memory) can be photographed at about 1.4 m/pixel. Additionally, lower-resolution pictures (to a lowest resolution of about 11 m/pixel) can be acquired by pixel averaging; these images can be much longer, ranging up to 2.8 x 500 km at 11 m/pixel. High-resolution data will be used to study sediments and sedimentary processes, polar processes and deposits, volcanism, and other geologic/geomorphic processes.

  4. Coherent infrared imaging camera (CIRIC)

    SciTech Connect

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.; Richards, R.K.; Emery, M.S.; Crutcher, R.I.; Sitter, D.N. Jr.; Wachter, E.A.; Huston, M.A.

    1995-07-01

    New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  5. Generation of monoenergetic positrons

    SciTech Connect

    Hulett, L.D. Jr.; Dale, J.M.; Miller, P.D. Jr.; Moak, C.D.; Pendyala, S.; Triftshaeuser, W.; Howell, R.H.; Alvarez, R.A.

    1983-01-01

    Many experiments have been performed on the generation and application of monoenergetic positron beams using annealed tungsten moderators and fast sources of ⁵⁸Co, ²²Na, ¹¹C, and LINAC bremsstrahlung. This paper will compare the degrees of success of our various approaches. Moderators made from both single-crystal and polycrystalline tungsten have been tried. Efforts to grow thin films of tungsten to be used as transmission moderators and brightness enhancement devices are in progress.

  6. Effect of ¹¹C-Methionine-Positron Emission Tomography on Gross Tumor Volume Delineation in Stereotactic Radiotherapy of Skull Base Meningiomas

    SciTech Connect

    Astner, Sabrina T. Dobrei-Ciuchendea, Mihaela; Essler, Markus; Bundschuh, Ralf A.; Sai, Heitetsu; Schwaiger, Markus; Molls, Michael; Weber, Wolfgang A.; Grosu, Anca-Ligia

    2008-11-15

    Purpose: To evaluate the effect of trimodal image fusion using computed tomography (CT), magnetic resonance imaging (MRI), and ¹¹C-methionine positron emission tomography (MET-PET) for gross tumor volume delineation in fractionated stereotactic radiotherapy of skull base meningiomas. Patients and Methods: In 32 patients with skull base meningiomas, the gross tumor volume (GTV) was outlined on CT scans fused to contrast-enhanced MRI (GTV-MRI/CT). A second GTV, encompassing the MET-PET-positive region only (GTV-PET), was generated. The additional information obtained by MET-PET concerning the GTV delineation was evaluated using the PET/CT/MRI co-registered images. The sizes of the overlapping regions of GTV-MRI/CT and GTV-PET were calculated and the amount of additional volume added by the complementing modality determined. Results: The addition of MET-PET was beneficial for GTV delineation in all but 3 patients. MET-PET detected small tumor portions with a mean volume of 1.6 ± 1.7 cm³ that were not identified by CT or MRI. The mean percentage of enlargement of the GTV using MET-PET as an additional imaging method was 9.4% ± 10.7%. Conclusions: Our data have demonstrated that integration of MET-PET in radiotherapy planning of skull base meningiomas can influence the GTV, possibly resulting in an increase, as well as in a decrease.

  7. Development of a Positron Source for JLab at the IAC

    SciTech Connect

    Forest, Tony

    2013-10-12

    We report on the research performed towards the development of a positron source for Jefferson Lab's (JLab) Continuous Electron Beam Accelerator Facility (CEBAF) in Newport News, VA. The first year of work was used to benchmark the predictions of our current simulation against positron production efficiency measurements at the IAC. The second year used the benchmarked simulation to design a beam line configuration which optimized positron production efficiency while minimizing radioactive waste, as well as to design and construct a positron converter target. The final year quantified the performance of the positron source. This joint research and development project brought together the experiences of both electron accelerator facilities. Our intention is to use the project as a springboard towards developing a program of accelerator-based research and education which will train students to meet the needs of both facilities as well as provide a pool of trained scientists.

  8. Quantum dot-based image sensors for cutting-edge commercial multispectral cameras

    NASA Astrophysics Data System (ADS)

    Mandelli, Emanuele; Beiley, Zach M.; Kolli, Naveen; Pattantyus-Abraham, Andras G.

    2016-09-01

    This work presents the development of a quantum dot-based photosensitive film engineered to be integrated on standard CMOS process wafers. It enables the design of exceptionally high performance, reliable image sensors. Quantum dot solids absorb light much more rapidly than typical silicon-based photodiodes do, and with the ability to tune the effective material bandgap, quantum dot-based imagers enable higher quantum efficiency over extended spectral bands, both in the Visible and IR regions of the spectrum. Moreover, a quantum dot-based image sensor enables desirable functions such as ultra-small pixels with low crosstalk, high full well capacity, global shutter and wide dynamic range at a relatively low manufacturing cost. At InVisage, we have optimized the manufacturing process flow and are now able to produce high-end image sensors for both visible and NIR in quantity.

  9. Formation of a high intensity low energy positron string

    NASA Astrophysics Data System (ADS)

    Donets, E. D.; Donets, E. E.; Syresin, E. M.; Itahashi, T.; Dubinov, A. E.

    2004-05-01

    The possibility of high intensity, low energy positron beam production is discussed. The proposed Positron String Trap (PST) is based on the principles and technology of the Electron String Ion Source (ESIS) developed in JINR during the last decade. A linear version of ESIS has been used successfully for the production of intense highly charged ion beams of various elements. Now the Tubular Electron String Ion Source (TESIS) concept is under study, and this opens really new promising possibilities in physics and technology. In this report, we discuss the application of the tubular-type trap for the storage of positrons cooled to the cryogenic temperature of 0.05 meV. It is intended that the positron flux at an energy of 1-5 eV, produced by the external source, is injected into the Tubular Positron Trap, which has a construction similar to that of the TESIS. The low energy positrons are then captured in the PST Penning trap and are cooled down because of their synchrotron radiation in the strong (5-10 T) applied magnetic field. It is expected that the proposed PST should permit storing and cooling to cryogenic temperature of up to 5×10⁹ positrons. The accumulated cooled positrons can be used further for various physics applications, for example, antihydrogen production.

  10. Positron lifetime spectrometer using a DC positron beam

    DOEpatents

    Xu, Jun; Moxom, Jeremy

    2003-10-21

    An entrance grid is positioned in the incident beam path of a DC beam positron lifetime spectrometer. The electrical potential difference between the sample and the entrance grid provides simultaneous acceleration of both the primary positrons and the secondary electrons. The result is a reduction in the time spread induced by the energy distribution of the secondary electrons. In addition, the sample, sample holder, entrance grid, and entrance face of the multichannel plate electron detector assembly are made parallel to each other, and are arranged at a tilt angle to the axis of the positron beam to effectively separate the path of the secondary electrons from the path of the incident positrons.

  11. Positron moderation and detection for positronic atoms

    NASA Astrophysics Data System (ADS)

    Fardad, Abolfazl

    An apparatus is under development for H⁻⁺* production, atoms consisting of a positron bound in a Rydberg state to an H⁻ ion. High energy e⁺ from radioactive ²²Na are slowed (moderated) to eV energies in solid neon and captured in a Penning trap. The procedure to deposit the neon is optimized, resulting in a 1.5% efficiency for moderating high energy e⁺. Neutral H⁻⁺* atoms with ~100 eV will be produced from these trapped e⁺ and exit the trap, hitting a metal surface where the e⁺ annihilates. Back-to-back annihilation gamma photons (Eγ ≈ 0.511 MeV) detected in coincidence, at the expected energy, are the fingerprint for H⁻⁺* production. A ²²Na test source mocks H⁻⁺* experiments with ~2.7% of the e⁺-emitting disintegrations detected. This high efficiency, with a background rate of ~2.8 events/hour, is achieved by surrounding the detectors with lead and cosmic ray detectors.

  12. Unsupervised Spectral-Spatial Feature Selection-Based Camouflaged Object Detection Using VNIR Hyperspectral Camera

    PubMed Central

    2015-01-01

    The detection of camouflaged objects is important for industrial inspection, medical diagnoses, and military applications. Conventional supervised learning methods for hyperspectral images can be a feasible solution. Such approaches, however, require a priori information of a camouflaged object and background. This letter proposes a fully autonomous feature selection and camouflaged object detection method based on the online analysis of spectral and spatial features. The statistical distance metric can generate candidate feature bands and further analysis of the entropy-based spatial grouping property can trim the useless feature bands. Camouflaged objects can be detected better with less computational complexity by optical spectral-spatial feature analysis. PMID:25879073

  13. Markerless motion tracking of awake animals in positron emission tomography.

    PubMed

    Kyme, Andre; Se, Stephen; Meikle, Steven; Angelis, Georgios; Ryder, Will; Popovic, Kata; Yatigammana, Dylan; Fulton, Roger

    2014-11-01

    Noninvasive functional imaging of awake, unrestrained small animals using motion-compensation removes the need for anesthetics and enables an animal's behavioral response to stimuli or administered drugs to be studied concurrently with imaging. While the feasibility of motion-compensated radiotracer imaging of awake rodents using marker-based optical motion tracking has been shown, markerless motion tracking would avoid the risk of marker detachment, streamline the experimental workflow, and potentially provide more accurate pose estimates over a greater range of motion. We have developed a stereoscopic tracking system which relies on native features on the head to estimate motion. Features are detected and matched across multiple camera views to accumulate a database of head landmarks and pose is estimated based on 3D-2D registration of the landmarks to features in each image. Pose estimates of a taxidermal rat head phantom undergoing realistic rat head motion via robot control had a root mean square error of 0.15 and 1.8 mm using markerless and marker-based motion tracking, respectively. Markerless motion tracking also led to an appreciable reduction in motion artifacts in motion-compensated positron emission tomography imaging of a live, unanesthetized rat. The results suggest that further improvements in live subjects are likely if nonrigid features are discriminated robustly and excluded from the pose estimation process.
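
    The 3D-to-2D registration step described above (aligning the accumulated head landmarks to the features detected in each camera image) is commonly posed as a perspective-n-point problem. The sketch below uses OpenCV's robust PnP solver as one hedged illustration; the function and variable names are assumptions and this is not the authors' tracking pipeline.

      # Hedged sketch of the 3D-to-2D registration step: given 3D head landmarks
      # from the accumulated database and their matched 2D detections in the
      # current image, estimate the head pose with a robust PnP solver (OpenCV).
      # Names and the RANSAC choice are assumptions, not the authors' pipeline.
      import numpy as np
      import cv2

      def estimate_pose(landmarks_3d, features_2d, camera_matrix, dist_coeffs=None):
          """landmarks_3d: (N, 3) model points; features_2d: (N, 2) image points.
          Returns the rotation matrix and translation vector (model -> camera)."""
          if dist_coeffs is None:
              dist_coeffs = np.zeros(5)
          ok, rvec, tvec, inliers = cv2.solvePnPRansac(
              np.asarray(landmarks_3d, dtype=np.float32),
              np.asarray(features_2d, dtype=np.float32),
              camera_matrix, dist_coeffs)
          if not ok:
              raise RuntimeError("pose estimation failed")
          R, _ = cv2.Rodrigues(rvec)        # rotation vector -> 3x3 matrix
          return R, tvec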

  14. Positron Annihilation in Insulating Materials

    SciTech Connect

    Asoka-Kumar, P; Sterne, PA

    2002-10-18

    We describe positron results from a wide range of insulating materials. We have completed positron experiments on a range of zeolite-Y samples, KDP crystals, alkali halides, and laser-damaged SiO₂. The present theoretical understanding of positron behavior in insulators is incomplete, and our combined theoretical and experimental approach is aimed at developing a predictive understanding of positron and positronium annihilation characteristics in insulators. Results from alkali halides and alkaline-earth halides show that positrons annihilate with only the halide ions, with no apparent contribution from the alkali or alkaline-earth cations. This contradicts the results of our existing theory for metals, which predicts roughly equal annihilation contributions from cation and anion. We also present results obtained using the Munich positron microprobe on laser-damaged SiO₂ samples.

  15. Camera-based ratiometric fluorescence transduction of nucleic acid hybridization with reagentless signal amplification on a paper-based platform using immobilized quantum dots as donors.

    PubMed

    Noor, M Omair; Krull, Ulrich J

    2014-10-21

    Paper-based diagnostic assays are gaining increasing popularity for their potential application in resource-limited settings and for point-of-care screening. Achievement of high sensitivity with precision and accuracy can be challenging when using paper substrates. Herein, we implement the red-green-blue color palette of a digital camera for quantitative ratiometric transduction of nucleic acid hybridization on a paper-based platform using immobilized quantum dots (QDs) as donors in fluorescence resonance energy transfer (FRET). A nonenzymatic and reagentless means of signal enhancement for QD-FRET assays on paper substrates is based on the use of dry paper substrates for data acquisition. This approach offered at least a 10-fold higher assay sensitivity and at least a 10-fold lower limit of detection (LOD) as compared to hydrated paper substrates. The surface of paper was modified with imidazole groups to assemble a transduction interface that consisted of immobilized QD-probe oligonucleotide conjugates. Green-emitting QDs (gQDs) served as donors with Cy3 as an acceptor. A hybridization event that brought the Cy3 acceptor dye in close proximity to the surface of immobilized gQDs was responsible for a FRET-sensitized emission from the acceptor dye, which served as an analytical signal. A hand-held UV lamp was used as an excitation source and ratiometric analysis using an iPad camera was possible by a relative intensity analysis of the red (Cy3 photoluminescence (PL)) and green (gQD PL) color channels of the digital camera. For digital imaging using an iPad camera, the LOD of the assay in a sandwich format was 450 fmol with a dynamic range spanning 2 orders of magnitude, while an epifluorescence microscope detection platform offered a LOD of 30 fmol and a dynamic range spanning 3 orders of magnitude. The selectivity of the hybridization assay was demonstrated by detection of a single nucleotide polymorphism at a contrast ratio of 60:1. This work provides an
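
    The ratiometric red-green read-out described above can be illustrated with a few lines that average the red (Cy3 PL) and green (gQD PL) channels over a detection spot and return their ratio. The OpenCV image loading and the fixed rectangular region of interest are assumptions for illustration; mapping the ratio to target amount would require a calibration curve as in the paper.

      # Minimal sketch of the ratiometric read-out: average the red channel (Cy3
      # FRET-sensitized emission) and the green channel (gQD emission) over the
      # detection spot and return their ratio. The OpenCV image loading and the
      # fixed rectangular region of interest are assumptions for illustration.
      import cv2

      def rg_ratio(image_path, roi):
          """roi = (row0, row1, col0, col1) bounding the paper detection spot.
          Returns mean(red) / mean(green) inside the spot."""
          img = cv2.imread(image_path)            # OpenCV loads images as B, G, R
          r0, r1, c0, c1 = roi
          spot = img[r0:r1, c0:c1].astype(float)
          return spot[..., 2].mean() / spot[..., 1].mean()

      # A calibration (standard) curve would then map this ratio to target amount.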

  16. Tumor Delineation Based on Time-Activity Curve Differences Assessed With Dynamic Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography in Rectal Cancer Patients

    SciTech Connect

    Janssen, Marco Aerts, Hugo; Ollers, Michel C.; Bosmans, Geert; Lee, John A.; Buijsen, Jeroen; Ruysscher, Dirk de; Lambin, Philippe; Lammering, Guido; Dekker, Andre L.A.J.

    2009-02-01

    Purpose: To develop an unsupervised tumor delineation method based on time-activity curve (TAC) shape differences between tumor tissue and healthy tissue and to compare the resulting contour with the two tumor contouring methods mostly used nowadays. Methods and Materials: Dynamic positron emission tomography-computed tomography (PET-CT) acquisition was performed for 60 min starting directly after fluorodeoxyglucose (FDG) injection. After acquisition and reconstruction, the data were filtered to attenuate noise. Correction for tissue motion during acquisition was applied. For tumor delineation, the TAC slope values were k-means clustered into two clusters. The resulting tumor contour (Contour I) was compared with a contour manually drawn by the radiation oncologist (Contour II) and a contour generated using a threshold of the maximum standardized uptake value (SUV; Contour III). Results: The tumor volumes of Contours II and III were significantly larger than the tumor volumes of Contour I, with both Contours II and III containing many voxels showing flat TACs at low activities. However, in some cases, Contour II did not cover all voxels showing upward TACs. Conclusion: Both automated SUV contouring and manual tumor delineation possibly incorrectly assign healthy tissue, showing flat TACs, as being malignant. On the other hand, in some cases the manually drawn tumor contours do not cover all voxels showing steep upward TACs, suspected to be malignant. Further research should be conducted to validate the possible superiority of tumor delineation based on dynamic PET analysis.
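
    A minimal sketch of the unsupervised delineation step is shown below: the slope of each voxel's time-activity curve is obtained from a linear fit and the slopes are k-means clustered into two groups, with the steeper cluster labelled tumor. scikit-learn is used for clustering; the array layout and the choice of a plain linear fit are assumptions, not the authors' exact processing.

      # Sketch of the unsupervised delineation step: the slope of each voxel's
      # time-activity curve (TAC) is taken from a linear fit, the slopes are
      # k-means clustered into two groups, and the steeper cluster is called
      # tumor. Array layout and the plain linear fit are assumptions.
      import numpy as np
      from sklearn.cluster import KMeans

      def delineate(tacs, times):
          """tacs: (n_voxels, n_frames) filtered activity curves;
          times: (n_frames,) frame mid-times. Returns a boolean tumor mask."""
          slopes = np.polyfit(times, tacs.T, deg=1)[0]        # one slope per voxel
          labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
                       .fit_predict(slopes.reshape(-1, 1))
          means = [slopes[labels == k].mean() for k in (0, 1)]
          return labels == int(np.argmax(means))              # steeper cluster = tumor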

  17. TEQUILA: NIR camera/spectrograph based on a Rockwell 1024x1024 HgCdTe FPA

    NASA Astrophysics Data System (ADS)

    Ruiz, Elfego; Sohn, Erika; Cruz-Gonzales, Irene; Salas, Luis; Parraga, Antonio; Perez, Manuel; Torres, Roberto; Cobos Duenas, Francisco J.; Gonzalez, Gaston; Langarica, Rosalia; Tejada, Carlos; Sanchez, Beatriz; Iriarte, Arturo; Valdez, J.; Gutierrez, Leonel; Lazo, Francisco; Angeles, Fernando

    1998-08-01

    We describe the configuration and operation modes of the IR camera/spectrograph TEQUILA, based on a 1024 × 1024 HgCdTe FPA. The optical system will allow three possible modes of operation: direct imaging, low- and medium-resolution spectroscopy, and polarimetry. The basic system is being designed to consist of the following: 1) An LN₂ dewar that houses the FPA together with the preamplifiers and a 24-position filter cylinder. 2) Control and readout electronics based on DSP modules linked to a workstation through fiber optics. 3) An opto-mechanical assembly cooled to -30 degrees that provides efficient operation of the instrument in its various modes. 4) A control module for the moving parts of the instrument. The opto-mechanical assembly will have the necessary provision to install a scanning Fabry-Perot interferometer and an adaptive optics correction system. The final image acquisition and control of the whole instrument are carried out on a workstation to provide the observer with a friendly environment. The system will operate at the 2.1 m telescope at the Observatorio Astronomico Nacional in San Pedro Martir, B.C. (Mexico), and is intended to be a first-light instrument for the new 7.8 m Mexican IR-Optical Telescope.

  18. Real-time camera-based face detection using a modified LAMSTAR neural network system

    NASA Astrophysics Data System (ADS)

    Girado, Javier I.; Sandin, Daniel J.; DeFanti, Thomas A.; Wolf, Laura K.

    2003-03-01

    This paper describes a cost-effective, real-time (640x480 at 30Hz) upright frontal face detector as part of an ongoing project to develop a video-based, tetherless 3D head position and orientation tracking system. The work is specifically targeted for auto-stereoscopic displays and projection-based virtual reality systems. The proposed face detector is based on a modified LAMSTAR neural network system. At the input stage, after achieving image normalization and equalization, a sub-window analyzes facial features using a neural network. The sub-window is segmented, and each part is fed to a neural network layer consisting of a Kohonen Self-Organizing Map (SOM). The output of the SOM neural networks are interconnected and related by correlation-links, and can hence determine the presence of a face with enough redundancy to provide a high detection rate. To avoid tracking multiple faces simultaneously, the system is initially trained to track only the face centered in a box superimposed on the display. The system is also rotationally and size invariant to a certain degree.

  19. Twenty-one degrees of freedom model based hand pose tracking using a monocular RGB camera

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jong-Il; Park, Hanhoon

    2016-01-01

    It is difficult to visually track a user's hand because of the many degrees of freedom (DOF) a hand has. For this reason, most model-based hand pose tracking methods have relied on the use of multiview images or RGB-D images. This paper proposes a model-based method that accurately tracks three-dimensional hand poses using monocular RGB images in real time. The main idea of the proposed method is to reduce hand tracking ambiguity by adopting a step-by-step estimation scheme consisting of three steps performed in consecutive order: palm pose estimation, finger yaw motion estimation, and finger pitch motion estimation. In addition, this paper proposes highly effective algorithms for each step. With the assumption that a human hand can be considered as an assemblage of articulated planes, the proposed method uses a piece-wise planar hand model which enables hand model regeneration. The hand model regeneration modifies the hand model to fit the current user's hand and improves the accuracy of the hand pose estimation results. Above all, the proposed method can operate in real time using only CPU-based processing. Consequently, it can be applied to various platforms, including egocentric vision devices such as wearable glasses. The results of several experiments conducted verify the efficiency and accuracy of the proposed method.

  20. The DRAGO gamma camera

    SciTech Connect

    Fiorini, C.; Gola, A.; Peloso, R.; Longoni, A.; Lechner, P.; Soltau, H.; Strueder, L.; Ottobrini, L.; Martelli, C.; Lui, R.; Madaschi, L.; Belloli, S.

    2010-04-15

    In this work, we present the results of the experimental characterization of the DRAGO (DRift detector Array-based Gamma camera for Oncology), a detection system developed for high-spatial resolution gamma-ray imaging. This camera is based on a monolithic array of 77 silicon drift detectors (SDDs), with a total active area of 6.7 cm², coupled to a single 5-mm-thick CsI(Tl) scintillator crystal. The use of an array of SDDs provides a high quantum efficiency for the detection of the scintillation light together with a very low electronics noise. A very compact detection module based on the use of integrated readout circuits was developed. The performances achieved in gamma-ray imaging using this camera are reported here. When imaging a 0.2 mm collimated ⁵⁷Co source (122 keV) over different points of the active area, a spatial resolution ranging from 0.25 to 0.5 mm was measured. The depth-of-interaction capability of the detector, thanks to the use of a Maximum Likelihood reconstruction algorithm, was also investigated by imaging a collimated beam tilted to an angle of 45° with respect to the scintillator surface. Finally, the imager was characterized with in vivo measurements on mice, in a real preclinical environment.

  1. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  2. TestDose: A nuclear medicine software based on Monte Carlo modeling for generating gamma camera acquisitions and dosimetry

    SciTech Connect

    Garcia, Marie-Paule Villoing, Daphnée; Ferrer, Ludovic; Cremonesi, Marta; Botta, Francesca; Ferrari, Mahila; Bardiès, Manuel

    2015-12-15

    computation performed on the ICRP 110 model is also presented. Conclusions: The proposed platform offers a generic framework to implement any scintigraphic imaging protocols and voxel/organ-based dosimetry computation. Thanks to the modular nature of TestDose, other imaging modalities could be supported in the future such as positron emission tomography.

  3. Positrons observed to originate from thunderstorms

    NASA Astrophysics Data System (ADS)

    Fishman, Gerald J.

    2011-05-01

    Thunderstorms are the result of warm, moist air moving rapidly upward, then cooling and condensing. Electrification occurs within thunderstorms (as noted by Benjamin Franklin), produced primarily by frictional processes among ice particles. This leads to lightning discharges; the types, intensities, and rates of these discharges vary greatly among thunderstorms. Even though scientists have been studying lightning since Franklin's time, new phenomena associated with thunderstorms are still being discovered. In particular, a recent finding by Briggs et al. [2011], based on observations by the Gamma-Ray Burst Monitor (GBM) instrument on NASA's satellite-based Fermi Gamma-ray Space Telescope (Fermi), shows that positrons are also generated by thunderstorms. Positrons are the antimatter form of electrons—they have the same mass and charge as an electron but are of positive rather than negative charge; hence the name positron. Observations of positrons from thunderstorms may lead to a new tool for understanding the electrification and high-energy processes occurring within thunderstorms. New theories, along with new observational techniques, are rapidly evolving in this field.

  4. [The linear hyperspectral camera rotating scan imaging geometric correction based on the precise spectral sampling].

    PubMed

    Wang, Shu-min; Zhang, Ai-wu; Hu, Shao-xing; Wang, Jing-meng; Meng, Xian-gang; Duan, Yi-hao; Sun, Wei-dong

    2015-02-01

    When the rotation speed of a ground-based hyperspectral imaging system is too fast during image collection and exceeds the speed limit, data are missed in the rectified image, which appear as black lines. At the same time, there is serious distortion in the collected raw images, which affects feature classification and identification. To solve these problems, this paper first introduces each component of the ground-based hyperspectral imaging system and gives the general process of data collection. The rotation speed is controlled in the data collection process according to the ground area covered by each frame and the image collection speed of the system. The spatial orientation model is then deduced in detail, combining the start scanning angle, the stop scanning angle, the minimum distance between the sensor and the scanned object, and other parameters. The oriented image is divided into grids and spectrally resampled, and the general flow of distorted-image correction is presented in this paper. Since the spatial resolution differs between adjacent frames, and in order to keep the highest image resolution in the corrected image, the minimum ground sampling distance is employed as the grid unit for dividing the geo-referenced image. To account for the spectral distortion caused by direct sampling when the new uniform grids and the old uneven grids are superimposed to take pixel values, a precise spectral sampling method based on the position distribution is proposed. The distorted image collected at the Lao Si Cheng ruin, in the town of Zhang Jiajie, Hunan province, is corrected with the proposed algorithm. The features keep their original geometric characteristics, which verifies the validity of the algorithm. We also extract the spectra of different features to compute the correlation coefficient. The results show that the improved spectral sampling method is

  5. An Intelligent Automated Door Control System Based on a Smart Camera

    PubMed Central

    Yang, Jie-Ci; Lai, Chin-Lun; Sheu, Hsin-Teng; Chen, Jiann-Jone

    2013-01-01

    This paper presents an innovative access control system, based on human detection and path analysis, to reduce false automatic door system actions while increasing the added values for security applications. The proposed system can first identify a person from the scene, and track his trajectory to predict his intention for accessing the entrance, and finally activate the door accordingly. The experimental results show that the proposed system has the advantages of high precision, safety, reliability, and can be responsive to demands, while preserving the benefits of being low cost and high added value. PMID:23666125

  6. An on-line calibration algorithm for external parameters of visual system based on binocular stereo cameras

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Liu, Zhen; Zhang, Zhonghua

    2014-11-01

    Stereo vision is key to visual measurement, robot vision, and autonomous navigation. Before a stereo vision system can be used, the intrinsic parameters of each camera and the external parameters of the system need to be calibrated. In engineering practice, the intrinsic parameters remain unchanged after the cameras are calibrated, but the positional relationship between the cameras can change because of vibration, knocks, and pressure in the vicinity of railways or motor workshops. Especially for large baselines, even minute changes in translation or rotation can affect the epipolar geometry and scene triangulation to such a degree that the visual system becomes unusable. A technology providing both real-time examination and on-line recalibration of the external parameters of a stereo system therefore becomes particularly important. This paper presents an on-line method for checking and recalibrating the positional relationship between stereo cameras. In epipolar geometry, the external parameters of the cameras can be obtained by factorization of the fundamental matrix, which offers a way to calculate the external camera parameters without any special targets. If the intrinsic camera parameters are known, the external parameters of the system can be calculated from a number of randomly matched points. The process is: (i) estimating the fundamental matrix from the feature point correspondences; (ii) computing the essential matrix from the fundamental matrix; (iii) obtaining the external parameters by decomposition of the essential matrix (see the sketch below). In the step of computing the fundamental matrix, traditional methods are sensitive to noise and cannot ensure the estimation accuracy. We consider the feature distribution in the actual scene images and introduce a regional weighted normalization algorithm to improve the accuracy of the fundamental matrix estimation. In contrast to traditional algorithms, experiments on simulated data prove that the method improves estimation
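
    Steps (i)-(iii) can be sketched with standard OpenCV calls, assuming both cameras share a known intrinsic matrix K. The regional weighted normalization refinement proposed in the paper is not reproduced here; this is only a baseline illustration of the fundamental-to-essential-to-pose chain.

      # Baseline sketch of steps (i)-(iii), assuming both cameras share a known
      # intrinsic matrix K. The regional weighted normalization proposed in the
      # paper is not reproduced; OpenCV's RANSAC estimator is used instead.
      import numpy as np
      import cv2

      def external_parameters(pts1, pts2, K):
          """pts1, pts2: (N, 2) matched points in the left/right images.
          Returns (R, t): rotation and unit-norm translation of camera 2."""
          pts1 = np.asarray(pts1, dtype=np.float64)
          pts2 = np.asarray(pts2, dtype=np.float64)
          # (i) fundamental matrix from the feature point correspondences
          F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
          # (ii) essential matrix from the fundamental matrix and intrinsics
          E = K.T @ F @ K
          # (iii) decompose E; recoverPose resolves the four-fold ambiguity
          inl1 = pts1[mask.ravel() == 1]
          inl2 = pts2[mask.ravel() == 1]
          _, R, t, _ = cv2.recoverPose(E, inl1, inl2, K)
          return R, t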

  7. Study on key techniques for camera-based hydrological record image digitization

    NASA Astrophysics Data System (ADS)

    Li, Shijin; Zhan, Di; Hu, Jinlong; Gao, Xiangtao; Bo, Ping

    2015-10-01

    With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine, and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be destroyed or contaminated due to improper preservation or overuse. Further, manually transcribing these data into the computer is a heavy workload and prone to error. Hence, digitizing these drawings and establishing the corresponding database will ensure the integrity of the data and provide invaluable information for further research. This paper presents an automatic system for hydrological record image digitization, which consists of three key techniques, i.e., image segmentation, intersection point localization, and distortion rectification. First, a novel approach to the binarization of the curves and grids in the water level sheet image is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross point location is invented, so that point-by-point processing is avoided, with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated, while the accuracy remains satisfactory. Experiments on several real water level records show that our proposed techniques are effective and capable of recovering the hydrological observations accurately.

  8. A computerized recognition system for the home-based physiotherapy exercises using an RGBD camera.

    PubMed

    Ar, Ilktan; Akgul, Yusuf Sinan

    2014-11-01

    Computerized recognition of home-based physiotherapy exercises has many benefits and has attracted considerable interest in the computer vision community. However, most methods in the literature view this task as a special case of motion recognition. In contrast, we propose to employ the three main components of a physiotherapy exercise (the motion patterns, the stance knowledge, and the exercise object) as different recognition tasks and embed them separately into the recognition system. The low-level information about each component is gathered using machine learning methods. Then, we use a generative Bayesian network to recognize the exercise types by combining the information from these sources at an abstract level, which takes advantage of domain knowledge for a more robust system. Finally, a novel postprocessing step is employed to estimate the exercise repetition counts. The performance evaluation of the system is conducted with a new dataset which contains RGB (red, green, and blue) and depth videos of home-based exercise sessions for commonly applied shoulder and knee exercises. The proposed system works without any body-part segmentation, body-part tracking, joint detection, or temporal segmentation methods. In the end, favorable exercise recognition rates and encouraging results on the estimation of repetition counts are obtained.

  9. Experimental setup for camera-based measurements of electrically and optically stimulated luminescence of silicon solar cells and wafers.

    PubMed

    Hinken, David; Schinke, Carsten; Herlufsen, Sandra; Schmidt, Arne; Bothe, Karsten; Brendel, Rolf

    2011-03-01

    We report in detail on the luminescence imaging setup developed within the last years in our laboratory. In this setup, the luminescence emission of silicon solar cells or silicon wafers is analyzed quantitatively. Charge carriers are excited electrically (electroluminescence) using a power supply for carrier injection or optically (photoluminescence) using a laser as illumination source. The luminescence emission arising from the radiative recombination of the stimulated charge carriers is measured spatially resolved using a camera. We give details of the various components including cameras, optical filters for electro- and photo-luminescence, the semiconductor laser and the four-quadrant power supply. We compare a silicon charged-coupled device (CCD) camera with a back-illuminated silicon CCD camera comprising an electron multiplier gain and a complementary metal oxide semiconductor indium gallium arsenide camera. For the detection of the luminescence emission of silicon we analyze the dominant noise sources along with the signal-to-noise ratio of all three cameras at different operation conditions.

  10. Virtex-II Pro Based Data Processing Unit for Small Spaceborne Camera Instruments

    NASA Astrophysics Data System (ADS)

    Dierker, C.; Fiethe, B.; Michalik, H.; Osterloh, B.; Bubenhagen, F.; Zhou, G.

    2007-08-01

    Individual Data Processing Units (DPUs) are commonly used for operational control and specific data processing of scientific space instruments. To overcome the limitations of traditional rad-hard (RH) or fully commercial design approaches, we show a System-on-Chip (SoC) solution based on a state-of-the-art field programmable gate array (FPGA) with integrated hard-wired processors. Although the design has low resource requirements for both power and mass, the processing power capabilities are moderate to high. In the design shown, the availability of standard CPUs for general-purpose use and of programmable logic for special functions in one device allows a very effective partitioning of data processing into hardware and software. High sensor data rates on the order of up to several hundred Mbit/s for advanced sensors can be handled by this approach. Various specific handling methods against radiation-induced upsets are used for an efficient design.

  11. Deriving hydraulic roughness from camera-based high resolution topography in field and laboratory experiments

    NASA Astrophysics Data System (ADS)

    Kaiser, Andreas; Neugirg, Fabian; Ebert, Louisa; Haas, Florian; Schmidt, Jürgen; Becht, Michael; Schindewolf, Marcus

    2016-04-01

    The hydraulic roughness, represented by Manning's n, is an essential input parameter in physically based soil erosion modeling. In order to acquire the roughness values for certain areas, on-site flow experiments have to be carried out. Their results are influenced by the selection of the test plot location and are thereby subject to the subjectivity of the researchers. The study aims at a methodological development for acquiring Manning's n by creating very high-resolution surface models with structure-from-motion approaches. Data acquisition took place during several field experiments in the Lainbach valley, southern Germany, on agricultural sites in Saxony, eastern Germany, and in central Brazil. Rill and interrill conditions were simulated by flow experiments. In order to validate our findings, flow velocity, as an input for the Manning equation, was measured with coloured dye. Grain and aggregate sizes were derived by measuring distances from a best-fit line to the reconstructed soil surface. Several diameters from D50 to D90 were tested, with D90 showing the best correlation between tracer experiments and photogrammetrically acquired data. A variety of roughness parameters were tested (standard deviation, random roughness, Garbrecht's n and D90). The best agreement between particle size and hydraulic roughness was achieved with a non-linear sigmoid function and D90, rather than with the Garbrecht equation or statistical parameters. To consolidate these findings, a laboratory setup was created to reproduce the field data under controlled conditions, excluding unknown influences such as infiltration and changes in surface morphology by erosion.
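
    A small numerical sketch of the quantities involved: Manning's n back-calculated from a dye-tracer velocity via the Manning equation, and a generic sigmoid linking D90 to n. The sigmoid coefficients and the example values are assumptions for illustration, not the fitted parameters of the study:

      import numpy as np

      def manning_n(velocity, hydraulic_radius, slope):
          """Manning equation solved for n: v = (1/n) * R^(2/3) * S^(1/2)."""
          return hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope) / velocity

      def sigmoid_n(d90_mm, n_min=0.02, n_max=0.40, d_mid=15.0, k=0.3):
          """Hypothetical sigmoid relating D90 (mm) to Manning's n; parameters are placeholders."""
          return n_min + (n_max - n_min) / (1.0 + np.exp(-k * (d90_mm - d_mid)))

      # Example: 0.15 m/s tracer velocity, 5 mm flow depth (~hydraulic radius of a wide rill), 10% slope
      print(manning_n(velocity=0.15, hydraulic_radius=0.005, slope=0.10))
      print(sigmoid_n(d90_mm=20.0))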

  12. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative positional relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method is more automated than the methods published before.
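
    A hedged sketch of a morphology-based localization step on a white image (threshold, connected-component labeling, centroids); the file name, threshold, and row-grouping scale below are assumptions rather than the authors' parameters:

      import numpy as np
      from scipy import ndimage
      import imageio.v2 as imageio

      white = imageio.imread("white_image.png").astype(float)
      if white.ndim == 3:                       # collapse RGB to intensity if needed
          white = white.mean(axis=2)

      mask = white > 0.5 * white.max()          # bright microlens spots
      mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))  # remove specks

      labels, n = ndimage.label(mask)
      centers = ndimage.center_of_mass(white, labels, index=range(1, n + 1))

      # Sort centers row by row so neighbouring microlens images can be related afterwards
      centers = sorted(centers, key=lambda c: (round(c[0] / 10.0), c[1]))
      print(f"{n} microlens images located")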

  13. A Kinect(™) camera based navigation system for percutaneous abdominal puncture.

    PubMed

    Xiao, Deqiang; Luo, Huoling; Jia, Fucang; Zhang, Yanfang; Li, Yong; Guo, Xuejun; Cai, Wei; Fang, Chihua; Fan, Yingfang; Zheng, Huimin; Hu, Qingmao

    2016-08-07

    Percutaneous abdominal puncture is a popular interventional method for the management of abdominal tumors. Image-guided puncture can help interventional radiologists improve targeting accuracy. Since the second generation of Kinect(™) was released recently, we developed an optical navigation system to investigate its feasibility for guiding percutaneous abdominal puncture, and to compare its performance on needle insertion guidance with that of the first-generation Kinect(™). For physical-to-image registration in this system, two surfaces extracted from preoperative CT and intraoperative Kinect(™) depth images were matched using an iterative closest point (ICP) algorithm. A 2D shape image-based correspondence searching algorithm was proposed for generating a close initial position before ICP matching. Evaluation experiments were conducted on an abdominal phantom and six beagles in vivo. For the phantom study, a two-factor experiment was designed to evaluate the effect of the operator's skill and trajectory on the target positioning error (TPE). A total of 36 needle punctures were tested on a Kinect(™) for Windows version 2 (Kinect(™) V2). The target registration error (TRE), user error, and TPE are 4.26  ±  1.94 mm, 2.92  ±  1.67 mm, and 5.23  ±  2.29 mm, respectively. No statistically significant differences in TPE regarding operator's skill and trajectory are observed. Additionally, a Kinect(™) for Windows version 1 (Kinect(™) V1) was tested with 12 insertions, and the TRE evaluated with the Kinect(™) V1 is statistically significantly larger than that with the Kinect(™) V2. For the animal experiment, fifteen artificial liver tumors were punctured under the guidance of the navigation system. The TPE was evaluated as 6.40  ±  2.72 mm, and its lateral and longitudinal components were 4.30  ±  2.51 mm and 3.80  ±  3.11 mm, respectively. This study demonstrates that the navigation accuracy of the proposed system is
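
    An illustrative point-to-point ICP loop of the kind used for the surface matching described above (not the authors' implementation); it assumes the depth-camera and CT surfaces are already roughly aligned by the 2D shape-based initialization:

      import numpy as np
      from scipy.spatial import cKDTree

      def best_rigid_transform(src, dst):
          """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
          cs, cd = src.mean(0), dst.mean(0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          R = Vt.T @ U.T
          if np.linalg.det(R) < 0:              # avoid reflections
              Vt[-1] *= -1
              R = Vt.T @ U.T
          return R, cd - R @ cs

      def icp(source, target, iters=30):
          """source: Nx3 camera surface points; target: Mx3 CT surface points."""
          tree = cKDTree(target)
          src = source.copy()
          for _ in range(iters):
              _, idx = tree.query(src)          # closest CT point for each camera point
              R, t = best_rigid_transform(src, target[idx])
              src = src @ R.T + t
          return src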

  14. Camera-based microswitch technology for eyelid and mouth responses of persons with profound multiple disabilities: two case studies.

    PubMed

    Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' faces but only small color marks. The person involved in Study I had previously used optic sensors fixed on an eyeglasses frame for detecting his eyelid- and mouth-opening responses. However, a deterioration of his head posture was making the correct location/use of this frame progressively more difficult. The person involved in Study II had previously been selected for a program relying on eyelid-closure responses and an optic sensor. Such a program, however, appeared difficult to implement given his sideward lying position and dystonic head movements. The new technology could be satisfactorily applied with both participants, using mouth- and eyelid-opening responses with the first participant and eyelid closures with the second. Both participants had large increases in responding during the intervention periods (i.e., when their responses were followed by preferred stimulation). The findings are discussed in relation to the role of the new technology in helping persons with multiple disabilities and minimal motor behavior.
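
    An illustrative sketch, not the authors' system, of how a small color mark could be detected by HSV thresholding in a webcam frame and its disappearance treated as an eyelid-closure response; the color range and minimum pixel count are guesses:

      import cv2
      import numpy as np

      LOWER = np.array([140, 80, 80])    # hypothetical HSV range for a magenta mark
      UPPER = np.array([170, 255, 255])

      def mark_center(frame_bgr):
          """Return the (x, y) centroid of the color mark, or None if it is not visible."""
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          mask = cv2.inRange(hsv, LOWER, UPPER)
          ys, xs = np.nonzero(mask)
          if len(xs) < 20:               # mark occluded, e.g. covered when the eyelid closes
              return None
          return float(xs.mean()), float(ys.mean())

      def eyelid_closure_detected(prev_center, center):
          """Count a response when a previously visible mark disappears."""
          return prev_center is not None and center is None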

  15. Positron Emission Tomography (PET)

    DOE R&D Accomplishments Database

    Welch, M. J.

    1990-01-01

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals used in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET.

  16. Quantum positron acoustic waves

    SciTech Connect

    Metref, Hassina; Tribeche, Mouloud

    2014-12-15

    Nonlinear quantum positron-acoustic (QPA) waves are investigated for the first time, within the theoretical framework of the quantum hydrodynamic model. In the small but finite amplitude limit, both deformed Korteweg-de Vries and generalized Korteweg-de Vries equations governing, respectively, the dynamics of QPA solitary waves and double-layers are derived. Moreover, a full finite amplitude analysis is undertaken, and a numerical integration of the obtained highly nonlinear equations is carried out. The results complement our previously published results on this problem.
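
    For orientation, the generic small-amplitude forms underlying the equations named above, written schematically; the deformed and generalized variants in the paper carry additional terms, and the coefficients A, B, C depend on the quantum plasma parameters, which are not reproduced here:

      \frac{\partial \phi}{\partial \tau} + A\,\phi\,\frac{\partial \phi}{\partial \xi}
        + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0
        \qquad \text{(Korteweg--de Vries)}

      \frac{\partial \phi}{\partial \tau} + C\,\phi^{2}\,\frac{\partial \phi}{\partial \xi}
        + B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0
        \qquad \text{(modified/generalized KdV)}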

  17. Positron Emission Tomography (PET)

    SciTech Connect

    Welch, M.J.

    1990-01-01

    Positron emission tomography (PET) assesses biochemical processes in the living subject, producing images of function rather than form. Using PET, physicians are able to obtain not the anatomical information provided by other medical imaging techniques, but pictures of physiological activity. In metaphoric terms, traditional imaging methods supply a map of the body's roadways, its anatomy; PET shows the traffic along those paths, its biochemistry. This document discusses the principles of PET, the radiopharmaceuticals used in PET, PET research, clinical applications of PET, the cost of PET, training of individuals for PET, the role of the United States Department of Energy in PET, and the future of PET. 22 figs.

  18. Positron beam lifetime spectroscopy of atomic scale defect distributions in bulk and microscopic volumes

    SciTech Connect

    Howell, R.H.; Cowan, T.E.; Hartley, J.; Sterne, P.; Brown, B.

    1996-05-01

    We are developing a defect analysis capability based on two positron beam lifetime spectrometers: the first is based on a 3 MeV electrostatic accelerator and the second on our high current linac beam. The high energy beam lifetime spectrometer is operational and positron lifetime analysis is performed with a 3 MeV positron beam on thick samples. It is being used for bulk sample analysis and analysis of samples encapsulated in controlled environments for in situ measurements. A second, low energy, microscopically focused, pulsed positron beam for defect analysis by positron lifetime spectroscopies is under development at the LLNL high current positron source. This beam will enable defect specific, 3-D maps of defect concentration with sub-micron location resolution and when coupled with first principles calculations of defect specific positron lifetimes it will enable new levels of defect concentration mapping and defect identification.

  19. Positron Emission Tomography of the Heart

    DOE R&D Accomplishments Database

    Schelbert, H. R.; Phelps, M. E.; Kuhl, D. E.

    1979-01-01

    Positron emission computed tomography (PCT) represents an important new tool for the noninvasive evaluation and, more importantly, quantification of myocardial performance. Most currently available techniques permit assessment of only one aspect of cardiac function, i.e., myocardial perfusion by gamma scintillation camera imaging with Thallium-201 or left ventricular function by echocardiography or radionuclide angiocardiography. With PCT it may become possible to study all three major segments of myocardial performance, i.e., regional blood flow, mechanical function and, most importantly, myocardial metabolism. Each of these segments can either be evaluated separately or in combination. This report briefly describes the principles and technological advantages of the imaging device, reviews currently available radioactive tracers and how they can be employed for the assessment of flow, function and metabolism; and, lastly, discusses possible applications of PCT for the study of cardiac physiology or its potential role in the diagnosis of cardiac disease.

  20. Matching the Best Viewing Angle in Depth Cameras for Biomass Estimation Based on Poplar Seedling Geometry

    PubMed Central

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-01-01

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (−45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and −45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of only a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results due to the fact that the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  1. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-06-04

    In energy crops for biomass production a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and -45° measurements in one-year poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of only a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results due to the fact that the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  2. Making relativistic positrons using ultraintense short pulse lasers

    SciTech Connect

    Chen Hui; Wilks, S. C.; Bonlie, J. D.; Chen, S. N.; Cone, K. V.; Elberson, L. N.; Price, D. F.; Schneider, M. B.; Shepherd, R.; Stafford, D. C.; Tommasini, R.; Van Maren, R.; Beiersdorfer, P.; Gregori, G.; Meyerhofer, D. D.; Myatt, J.

    2009-12-15

    This paper describes a new positron source using ultraintense short pulse lasers. Although it has been theoretically studied since the 1970s, the use of lasers as a valuable new positron source was not demonstrated experimentally until recent years, when the petawatt-class short pulse lasers were developed. In 2008 and 2009, in a series of experiments performed at the Lawrence Livermore National Laboratory, a large number of positrons were observed after shooting a millimeter thick solid gold target. Up to 2×10^10 positrons/s ejected at the back of approximately millimeter thick gold targets were detected. The targets were illuminated with short (~1 ps) ultraintense (~1×10^20 W/cm^2) laser pulses. These positrons are produced predominantly by the Bethe-Heitler process and have an effective temperature of 2-4 MeV, with the distribution peaking at 4-7 MeV. The angular distribution of the positrons is anisotropic. For a wide range of applications, this new laser-based positron source with its unique characteristics may complement the existing sources based on radioactive isotopes and accelerators.

  3. Data for modeling of positron collisions and transport in gases

    NASA Astrophysics Data System (ADS)

    Petrović, Z. Lj.; Banković, A.; Dujko, S.; Marjanović, S.; Malović, G.; Sullivan, J. P.; Buckman, S. J.

    2013-07-01

    We review the current status of positron cross sections for collisions with atoms and molecules from the viewpoint of their use in studies of positron transport processes in gases, liquids and human tissue. The data include cross sections for positron scattering in rare gases, molecular gases (e.g., N2, H2, CO2, CF4) and in particular for organic molecules and those relevant for applications in medicine (e.g. formic acid and water vapor). The cross sections were taken from an assessment of previously published positron-target cross sections. All of the cross sections are based on binary collision measurements and theoretical calculations, and they were not explicitly modified according to the standard swarm analysis. The main reason for this is the systematic lack of experimental data for positron transport properties in gases. However, we believe that our compiled sets of cross sections are at a level of sophistication, and of sufficient accuracy, to provide a correct interpretation of future positron-based experiments. Using these cross sections as an input in our Monte Carlo simulations and Boltzmann equation treatment, we review some interesting points observed in the profiles of various transport coefficients for positrons in gases. Particular emphasis is placed upon the analysis of kinetic phenomena generated by the explicit influence of Ps formation.

  4. Resonances in Positron-molecule Interactions

    NASA Astrophysics Data System (ADS)

    Surko, C. M.

    2006-05-01

    The development of cold, trap-based beams has enabled high-resolution, energy-resolved studies of positron scattering and annihilation processes [1]. This talk focuses on three topics in this area. For hydrocarbon molecules such as alkanes (CnH2n+2), giant enhancements in annihilation rates are observed due to vibrational Feshbach resonances. The dependence of the rates on positron energy provides evidence that positrons bind to these molecules and a measure of the binding energies [1]. Recent results include evidence for a second, "positronically excited" bound state and new data for the methane series, CH3X, where X is a halogen. Other "resonance-like features" are sharp increases in the near-threshold electronic excitation cross sections for CO and N2 [2], and in the vibrational excitation cross sections for CO, CO2 and CH4 [3, 4]. Outstanding questions and the relationship of these observations to available theoretical predictions will be discussed. [1] C. M. Surko, G. F. Gribakin, and S. J. Buckman, J. Phys. B 38, R57 (2005). [2] J. P. Marler and C. M. Surko, Phys. Rev. A 72, 062713 (2005). [3] J. P. Marler and C. M. Surko, Phys. Rev. A 72, 062702 (2005). [4] J. P. Marler, G. F. Gribakin and C. M. Surko, Nuclear Instrum. and Meth. B, in press (2006).

  5. Cyclotrons and positron emitting radiopharmaceuticals

    SciTech Connect

    Wolf, A.P.; Fowler, J.S.

    1984-01-01

    The state of the art of Positron Emission Tomography (PET) technology as related to cyclotron use and radiopharmaceutical production is reviewed. The paper discusses available small cyclotrons, the positron emitters which can be produced and the yields possible, target design, and radiopharmaceutical development and application. 97 refs., 12 tabs. (ACR)

  6. Undulator Production of Polarized Positrons

    SciTech Connect

    William M. Bugg

    2008-08-27

    E-166 at SLAC has demonstrated the feasibility of producing polarized positrons for the International Linear Collider using a helical undulator to produce polarized photons, which are converted in a thin target to polarized positrons. The success of the experiment has resulted in the choice of this technique for the baseline design of the ILC.

  7. Modular strategy for the construction of radiometalated antibodies for positron emission tomography based on inverse electron demand Diels-Alder click chemistry.

    PubMed

    Zeglis, Brian M; Mohindra, Priya; Weissmann, Gabriel I; Divilov, Vadim; Hilderbrand, Scott A; Weissleder, Ralph; Lewis, Jason S

    2011-10-19

    A modular system for the construction of radiometalated antibodies was developed based on the bioorthogonal cycloaddition reaction between 3-(4-benzylamino)-1,2,4,5-tetrazine and the strained dienophile norbornene. The well-characterized, HER2-specific antibody trastuzumab and the positron-emitting radioisotopes (64)Cu and (89)Zr were employed as a model system. The antibody was first covalently coupled to norbornene, and this stock of norbornene-modified antibody was then reacted with tetrazines bearing the chelators 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) or desferrioxamine (DFO) and subsequently radiometalated with (64)Cu and (89)Zr, respectively. The modification strategy is simple and robust, and the resultant radiometalated constructs were obtained in high specific activity (2.7-5.3 mCi/mg). For a given initial stoichiometric ratio of norbornene to antibody, the (64)Cu-DOTA- and (89)Zr-DFO-based probes were shown to be nearly identical in terms of stability, the number of chelates per antibody, and immunoreactivity (>93% in all cases). In vivo PET imaging and acute biodistribution experiments revealed significant, specific uptake of the (64)Cu- and (89)Zr-trastuzumab bioconjugates in HER2-positive BT-474 xenografts, with little background uptake in HER2-negative MDA-MB-468 xenografts or other tissues. This modular system, in which the divergent point is a single covalently modified antibody stock that can be reacted selectively with various chelators, will allow for both greater versatility and more facile cross-comparisons in the development of antibody-based radiopharmaceuticals.

  8. Determining Camera Gain in Room Temperature Cameras

    SciTech Connect

    Joshua Cogliati

    2010-12-01

    James R. Janesick provides a method for determining the amplification of a CCD or CMOS camera when only access to the raw images is available. However, the equation that is provided ignores the contribution of dark current. For CCD or CMOS cameras that are cooled well below room temperature, this is not a problem; however, the technique needs adjustment for use with room-temperature cameras. This article describes the adjustment made to the equation and a test of this method.
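
    A sketch in the spirit of the described adjustment, assuming the standard mean-variance (photon transfer) approach with matched dark frames; it is not the article's exact equation, and the frame arrays are placeholders:

      # F1, F2: two flat-field frames at equal illumination; D1, D2: two dark frames
      # taken with the same exposure time and temperature.
      import numpy as np

      def gain_e_per_dn(F1, F2, D1, D2):
          signal = 0.5 * (F1.mean() + F2.mean()) - 0.5 * (D1.mean() + D2.mean())
          # Differencing frame pairs removes fixed pattern noise; /2 converts the
          # variance of a difference back to a single-frame variance.
          var_flat = np.var(F1.astype(float) - F2.astype(float)) / 2.0
          var_dark = np.var(D1.astype(float) - D2.astype(float)) / 2.0
          shot_var = var_flat - var_dark        # dark-current and read-noise variance removed
          return signal / shot_var              # e-/DN under a pure shot-noise model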

  9. High resolution Cerenkov light imaging of induced positron distribution in proton therapy

    SciTech Connect

    Yamamoto, Seiichi; Fujii, Kento; Morishita, Yuki; Okumura, Satoshi; Komori, Masataka; Toshito, Toshiyuki

    2014-11-01

    Purpose: In proton therapy, imaging of the positron distribution produced by fragmentation during or soon after proton irradiation is a useful method to monitor the proton range. Although positron emission tomography (PET) is typically used for this imaging, its spatial resolution is limited. Cerenkov light imaging is a new molecular imaging technology that detects the visible photons that are produced from high-speed electrons using a high sensitivity optical camera. Because its inherent spatial resolution is much higher than PET, the authors can measure more precise information of the proton-induced positron distribution with Cerenkov light imaging technology. For this purpose, they conducted Cerenkov light imaging of induced positron distribution in proton therapy. Methods: First, the authors evaluated the spatial resolution of our Cerenkov light imaging system with a 22Na point source for the actual imaging setup. Then the transparent acrylic phantoms (100 × 100 × 100 mm^3) were irradiated with two different proton energies using a spot scanning proton therapy system. Cerenkov light imaging of each phantom was conducted using a high sensitivity electron multiplied charge coupled device (EM-CCD) camera. Results: The Cerenkov light’s spatial resolution for the setup was 0.76 ± 0.6 mm FWHM. They obtained high resolution Cerenkov light images of the positron distributions in the phantoms for two different proton energies and made fused images of the reference images and the Cerenkov light images. The depths of the positron distribution in the phantoms from the Cerenkov light images were almost identical to the simulation results. The decay curves derived from the region-of-interests (ROIs) set on the Cerenkov light images revealed that Cerenkov light images can be used for estimating the half-life of the radionuclide components of positrons. Conclusions: High resolution Cerenkov light imaging of proton-induced positron distribution was possible. The

  10. Analytical Study of the Effect of the System Geometry on Photon Sensitivity and Depth of Interaction of Positron Emission Mammography

    PubMed Central

    Aguiar, Pablo; Lois, Cristina

    2012-01-01

    Positron emission mammography (PEM) cameras are novel dedicated PET systems optimized to image the breast. For these cameras it is essential to achieve an optimum trade-off between sensitivity and spatial resolution, and therefore the main challenge for the novel cameras is to improve the sensitivity without degrading the spatial resolution. We carry out an analytical study of the effect of the different detector geometries on the photon sensitivity and the angle of incidence of the detected photons, which is related to the DOI effect and therefore to the intrinsic spatial resolution. To this end, dual-head detectors were compared to box and different polygon-detector configurations. Our results showed that higher sensitivity and uniformity were found for box and polygon-detector configurations compared to dual-head cameras. Thus, the optimal configuration in terms of sensitivity is a PEM scanner based on a polygon of twelve (dodecagon) or more detectors. We have shown that this configuration is clearly superior to dual-head detectors and slightly better than box, octagon, and hexagon detectors. Nevertheless, DOI effects are increased for this configuration compared to dual-head and box scanners, and therefore an accurate compensation for this effect is required. PMID:23049553

  11. Measuring cues for stand-off deception detection based on full-body nonverbal features in body-worn cameras

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Burghouts, Gertjan; den Hollander, Richard; van der Zee, Sophie; Baan, Jan; ten Hove, Johan-Martijn; van Diepen, Sjaak; van den Haak, Paul; van Rest, Jeroen

    2016-10-01

    Deception detection is valuable in the security domain to distinguish truth from lies. It is desirable in many security applications, such as suspect and witness interviews and airport passenger screening. Interviewers are constantly trying to assess the credibility of a statement, usually based on intuition without objective technical support. However, psychological research has shown that humans can hardly perform better than random guessing. Deception detection is a multi-disciplinary research area with interest from different fields, such as psychology and computer science. In the last decade, several developments have helped to improve the accuracy of lie detection (e.g., with a concealed information test, increasing the cognitive load, or measurements with motion capture suits) and relevant cues have been discovered (e.g., eye blinking or fiddling with the fingers). With an increasing presence of mobile phones and bodycams in society, a mobile, stand-off, automatic deception detection methodology based on various cues from the whole body would create new application opportunities. In this paper, we study the feasibility of measuring these visual cues automatically on different parts of the body, laying the groundwork for stand-off deception detection in more flexible and mobile deployable sensors, such as body-worn cameras. We give an extensive overview of recent developments in two communities: in the behavioral-science community, the developments that improve deception detection with special attention to the observed relevant non-verbal cues, and in the computer-vision community, the recent methods that are able to measure these cues. The cues are extracted from several body parts: the eyes, the mouth, the head, and the full-body pose. We performed an experiment using several state-of-the-art video-content-analysis (VCA) techniques to assess the quality of robustly measuring these visual cues.

  12. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    NASA Astrophysics Data System (ADS)

    Harrild, M.; Webley, P.; Dehn, J.

    2014-12-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground-based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.
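
    A minimal sketch of what such an automated brightness-monitoring script might look like; the webcam URL, polling interval, baseline smoothing, and alert factor are all placeholders, not the authors' implementation:

      import time
      import urllib.request
      from io import BytesIO

      import numpy as np
      from PIL import Image

      WEBCAM_URL = "http://example.org/volcano_cam/latest.jpg"   # hypothetical endpoint
      baseline, alpha, factor = None, 0.05, 3.0

      while True:
          with urllib.request.urlopen(WEBCAM_URL) as resp:
              frame = np.asarray(Image.open(BytesIO(resp.read())).convert("L"), dtype=float)
          brightness = frame.mean()
          if baseline is None:
              baseline = brightness
          elif brightness > factor * baseline:
              print(f"Possible incandescence: {brightness:.1f} vs baseline {baseline:.1f}")
          baseline = (1 - alpha) * baseline + alpha * brightness   # slowly adapting baseline
          time.sleep(60)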

  13. Change detection and characterization of volcanic activity using ground based low-light and near infrared cameras to monitor incandescence and thermal signatures

    NASA Astrophysics Data System (ADS)

    Harrild, Martin; Webley, Peter; Dehn, Jonathan

    2015-04-01

    Knowledge and understanding of precursory events and thermal signatures are vital for monitoring volcanogenic processes, as activity can often range from low-level lava effusion to large explosive eruptions, easily capable of ejecting ash up to aircraft cruise altitudes. Using ground-based remote sensing techniques to monitor and detect this activity is essential, but often the required equipment and maintenance is expensive. Our investigation explores the use of low-light cameras to image volcanic activity in the visible to near infrared (NIR) portion of the electromagnetic spectrum. These cameras are ideal for monitoring as they are cheap, consume little power, are easily replaced and can provide near real-time data. We focus here on the early detection of volcanic activity, using automated scripts that capture streaming online webcam imagery and evaluate image pixel brightness values to determine relative changes and flag increases in activity. The script is written in Python, an open source programming language, to reduce the overall cost to potential consumers and increase the application of these tools across the volcanological community. In addition, by performing laboratory tests to determine the spectral response of these cameras, a direct comparison of collocated low-light and thermal infrared cameras has allowed approximate eruption temperatures and effusion rates to be determined from pixel brightness. The results of a field campaign in June, 2013 to Stromboli volcano, Italy, are also presented here. Future field campaigns to Latin America will include collaborations with INSIVUMEH in Guatemala, to apply our techniques to Fuego and Santiaguito volcanoes.

  14. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    NASA Astrophysics Data System (ADS)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA MASCARA is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor the near-entire sky at each location. Once all stations have been installed, MASCARA will be able to provide a nearly 24-hr coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V=4-8 magnitude range - currently not probed by space- or ground-based surveys. The bright/nearby transiting planet systems, which MASCARA will discover, will be the key targets for detailed planet atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras showing that a precision of 0.3-1% per hour can be readily achieved. We plan to roll out the first MASCARA station before the end of 2013. A 5-station MASCARA can within two years discover up to a dozen of the brightest transiting planet systems in the sky.

  15. Making Relativistic Positrons Using Ultra-Intense Short Pulse Lasers

    SciTech Connect

    Chen, H; Wilks, S; Bonlie, J; Chen, C; Chen, S; Cone, K; Elberson, L; Gregori, G; Liang, E; Price, D; Van Maren, R; Meyerhofer, D D; Mithen, J; Murphy, C V; Myatt, J; Schneider, M; Shepherd, R; Stafford, D; Tommasini, R; Beiersdorfer, P

    2009-08-24

    This paper describes a new positron source produced using ultra-intense short pulse lasers. Although it has been studied in theory since as early as the 1970s, the use of lasers as a valuable new positron source was not demonstrated experimentally until recent years, when the petawatt-class short pulse lasers were developed. In 2008 and 2009, in a series of experiments performed at Lawrence Livermore National Laboratory, a large number of positrons were observed after shooting a millimeter thick solid gold target. Up to 2 × 10^10 positrons per steradian ejected out the back of ~mm thick gold targets were detected. The targets were illuminated with short (~1 ps) ultra-intense (~1 × 10^20 W/cm^2) laser pulses. These positrons are produced predominantly by the Bethe-Heitler process, and have an effective temperature of 2-4 MeV, with the distribution peaking at 4-7 MeV. The angular distribution of the positrons is anisotropic. For a wide range of applications, this new laser-based positron source with its unique characteristics may complement the existing sources using radioactive isotopes and accelerators.

  16. Positron scattering measurements for application to medical physics

    NASA Astrophysics Data System (ADS)

    Sullivan, James

    2015-09-01

    While the use of positrons in medical imaging is now well established, there is still much to learn regarding the transport of positrons through the body, and the subsequent damage induced. Current models of dosimetry use only a crude approximation of the collision physics involved, and at low energies misrepresent the thermalisation process to a considerable degree. Recently, collaborative work has commenced to attempt to refine these models, incorporating a better representation of the underlying physics and trying to gain a better understanding of the damage done after the emission of a positron from a medical radioisotope. This problem is being attacked from several different angles, with new models being developed based upon established techniques in plasma and swarm physics. For all these models, a realistic representation of the collision processes of positrons with relevant molecular species is required. At the Australian National University, we have undertaken a program of measurements of positron scattering from a range of molecules that are important in biological systems, with a focus on analogs to DNA. This talk will present measurements of positron scattering from a range of these molecules, as well as describing the experimental techniques employed to make such measurements. Targets have been measured that are both liquid and solid at room temperature, and new approaches have been developed to get absolute cross section data. The application of the data to various models of positron thermalisation will also be described.

  17. Positron Emission Tomography/Computed Tomography Imaging of Residual Skull Base Chordoma Before Radiotherapy Using Fluoromisonidazole and Fluorodeoxyglucose: Potential Consequences for Dose Painting

    SciTech Connect

    Mammar, Hamid; Kerrou, Khaldoun; Nataf, Valerie; Pontvert, Dominique; Clemenceau, Stephane; Lot, Guillaume; George, Bernard; Polivka, Marc; Mokhtari, Karima; Ferrand, Regis; Feuvret, Loïc; Habrand, Jean-Louis; Pouyssegur, Jacques; Mazure, Nathalie; Talbot, Jean-Noël

    2012-11-01

    Purpose: To detect the presence of hypoxic tissue, which is known to increase the radioresistant phenotype, by its uptake of fluoromisonidazole (18F) (FMISO) using hybrid positron emission tomography/computed tomography (PET/CT) imaging, and to compare it with the glucose-avid tumor tissue imaged with fluorodeoxyglucose (18F) (FDG), in residual postsurgical skull base chordoma scheduled for radiotherapy. Patients and Methods: Seven patients with incompletely resected skull base chordomas were planned for high-dose radiotherapy (dose ≥70 Gy). All 7 patients underwent FDG and FMISO PET/CT. Images were analyzed qualitatively by visual examination and semiquantitatively by computing the ratio of the maximal standardized uptake value (SUVmax) of the tumor and cerebellum (T/C R), with delineation of lesions on conventional imaging. Results: Of the eight lesion sites imaged with FDG PET/CT, only one was visible, whereas seven of nine lesions were visible on FMISO PET/CT. The median SUVmax in the tumor area was 2.8 g/mL (minimum 2.1; maximum 3.5) for FDG and 0.83 g/mL (minimum 0.3; maximum 1.2) for FMISO. The T/C R values ranged between 0.30 and 0.63 for FDG (median, 0.41) and between 0.75 and 2.20 for FMISO (median, 1.59). FMISO T/C R >1 in six lesions suggested the presence of hypoxic tissue. There was no correlation between FMISO and FDG uptake in individual chordomas (r = 0.18, p = 0.7). Conclusion: FMISO PET/CT enables imaging of the hypoxic component in residual chordomas. In the future, it could help to better define boosted volumes for irradiation and to overcome the radioresistance of these lesions. No relationship was found between hypoxia and glucose metabolism in these tumors after initial surgery.

  18. The calibration of cellphone camera-based colorimetric sensor array and its application in the determination of glucose in urine.

    PubMed

    Jia, Ming-Yan; Wu, Qiong-Shui; Li, Hui; Zhang, Yu; Guan, Ya-Feng; Feng, Liang

    2015-12-15

    In this work, a novel approach that can calibrate the colors obtained with a cellphone camera was proposed for the colorimetric sensor array. The variations in ambient light conditions, imaging positions, and even cellphone brands could all be compensated for by taking the black and white backgrounds of the sensor array as references, thereby yielding accurate measurements. The proposed calibration approach was successfully applied to the detection of glucose in urine by a colorimetric sensor array. Snapshots of the glucose sensor array taken with a cellphone camera were calibrated by the proposed compensation method, and urine samples at different glucose concentrations were well discriminated with no confusion after a hierarchical clustering analysis.
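
    A hedged sketch of the reference-based idea: each RGB channel of a spot is rescaled so that the black and white background patches in the same snapshot map to 0 and 1, compensating for illumination and camera differences. The numbers below are invented simply to show that two lighting conditions yield nearly the same calibrated values:

      import numpy as np

      def calibrate_rgb(spot_rgb, black_rgb, white_rgb):
          spot  = np.asarray(spot_rgb,  dtype=float)
          black = np.asarray(black_rgb, dtype=float)
          white = np.asarray(white_rgb, dtype=float)
          return np.clip((spot - black) / (white - black), 0.0, 1.0)

      # Example: the same dye spot photographed under two different lighting conditions
      print(calibrate_rgb([120, 95, 60], black_rgb=[10, 12, 11], white_rgb=[240, 238, 235]))
      print(calibrate_rgb([ 60, 48, 30], black_rgb=[ 5,  6,  5], white_rgb=[120, 119, 118]))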

  19. Applications and advances of positron beam spectroscopy: appendix a

    SciTech Connect

    Howell, R. H., LLNL

    1997-11-05

    Over 50 scientists from DOE-DP, DOE-ER, the national laboratories, academia and industry attended a workshop held on November 5-7, 1997 at Lawrence Livermore National Laboratory, jointly sponsored by the DOE Division of Materials Science, the Materials Research Institute at LLNL, and the University of California President's Office. Workshop participants were charged to address two questions: Is there a need for a national center for materials analysis using positron techniques, and can the capabilities at Lawrence Livermore National Laboratory serve this need? To demonstrate the need for a national center, the workshop participants discussed the technical advantages enabled by high positron currents and advanced measurement techniques, the role that these techniques will play in materials analysis, and the demand for the data. There were general discussions led by review talks on positron analysis techniques and their applications to problems in semiconductors, polymers and composites, metals and engineering materials, surface analysis, and advanced techniques. These were followed by focus sessions on positron analysis opportunities in these same areas. Livermore now leads the world in materials analysis capabilities by positrons due to developments in response to the demands of science-based stockpile stewardship. There was a detailed discussion of the LLNL capabilities and a tour of the facilities. The Livermore facilities now include the world's highest-current beam of keV positrons, a scanning pulsed positron microprobe under development capable of three-dimensional maps of defect size and concentration, an MeV positron beam for defect analysis of large samples, and electron momentum spectroscopy by positrons. This document is a supplement to the written summary report. It contains a complete schedule, a list of attendees, and the vuegraphs for the presentations in the review and focus sessions.

  20. Positron Emission Tomography Based Elucidation of the Enhanced Permeability and Retention Effect in Dogs with Cancer Using Copper-64 Liposomes.

    PubMed

    Hansen, Anders E; Petersen, Anncatrine L; Henriksen, Jonas R; Boerresen, Betina; Rasmussen, Palle; Elema, Dennis R; af Rosenschöld, Per Munck; Kristensen, Annemarie T; Kjær, Andreas; Andresen, Thomas L

    2015-07-28

    Since the first report of the enhanced permeability and retention (EPR) effect, research into nanocarrier-based antitumor drugs has been intense. The field has been devoted to treatment of cancer by exploiting EPR-based accumulation of nanocarriers in solid tumors, which for many years was considered to be a ubiquitous phenomenon. However, the understanding of differences in the EPR-effect between tumor types, heterogeneities within each patient group, and dependency on tumor development stage in humans is sparse. It is therefore important to enhance our understanding of the EPR-effect in large animals and humans with spontaneously developed cancer. In the present paper, we describe a novel loading method of copper-64 into PEGylated liposomes and use these liposomes to evaluate the EPR-effect in 11 canine cancer patients with spontaneous solid tumors by PET/CT imaging. We thereby provide the first high-resolution analysis of EPR-based tumor accumulation in large animals. We find that the EPR-effect is strong in some tumor types but cannot be considered a general feature of solid malignant tumors since we observed a high degree of accumulation heterogeneity between tumors. Six of seven included carcinomas displayed high uptake levels of liposomes, whereas one of four sarcomas displayed signs of liposome retention. We conclude that nanocarrier-radiotracers could be important in identifying cancer patients that will benefit from nanocarrier-based therapeutics in clinical practice.

  1. LED characterization for development of on-board calibration unit of CCD-based advanced wide-field sensor camera of Resourcesat-2A

    NASA Astrophysics Data System (ADS)

    Chatterjee, Abhijit; Verma, Anurag

    2016-05-01

    The Advanced Wide Field Sensor (AWiFS) camera caters to the high temporal resolution requirement of the Resourcesat-2A mission, with a revisit of 5 days. The AWiFS camera consists of four spectral bands, three in the visible and near IR and one in the short wave infrared. The imaging concept in the VNIR bands is based on push-broom scanning that uses linear-array silicon charge coupled device (CCD) based Focal Plane Arrays (FPAs). The on-board calibration unit for these CCD-based FPAs is used to monitor any degradation of the FPA during the entire mission life. Four LEDs are operated in constant current mode, and 16 different light intensity levels are generated by electronically changing the exposure of the CCD throughout the calibration cycle. This paper describes the experimental setup and characterization results of various flight-model visible LEDs (λP = 650 nm) for the development of the on-board calibration unit of the Advanced Wide Field Sensor (AWiFS) camera of RESOURCESAT-2A. Various LED configurations have been studied to cover the dynamic range of the 6000-pixel silicon CCD-based focal plane array from 20% to 60% of saturation during the night pass of the satellite, in order to identify degradation of detector elements. The paper also compares simulated and experimental CCD output profiles for different LED combinations in constant current mode.

  2. Research on evaluation method of CMOS camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaoqiang; Han, Weiqiang; Cui, Lanfang

    2014-09-01

    In some professional imaging application fields, we need to test key parameters of a CMOS camera and evaluate the performance of the device. Aiming at this requirement, this paper proposes a complete test method to evaluate a CMOS camera. Considering that a CMOS camera has large fixed pattern noise, the method applies a pixel-based 'photon transfer curve method' to measure the gain and the read noise of the camera. The advantage of this method is that it can effectively remove the error introduced by the response nonlinearity. Then the cause of the photoelectric response nonlinearity of the CMOS camera is analyzed theoretically, and the calculation formula for the CMOS camera response nonlinearity is deduced. Finally, we use the proposed test method to test a CMOS camera with 2560 × 2048 pixels. In addition, we analyze the validity and the feasibility of this method.
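
    A hedged sketch of a per-pixel ("pixel-based") photon-transfer estimate, assuming stacks of repeated flat and dark frames at one exposure setting; because mean and variance are computed per pixel over time, fixed pattern structure never enters the statistics. The function below is an illustration, not the paper's procedure:

      import numpy as np

      def per_pixel_gain_and_read_noise(flat_stack, dark_stack):
          """flat_stack, dark_stack: arrays of shape (n_frames, H, W), same exposure settings."""
          flat_stack = flat_stack.astype(float)
          dark_stack = dark_stack.astype(float)
          mean_sig = flat_stack.mean(axis=0) - dark_stack.mean(axis=0)   # per-pixel signal (DN)
          var_sig  = flat_stack.var(axis=0)  - dark_stack.var(axis=0)    # per-pixel shot variance
          gain_map = mean_sig / var_sig                                  # e-/DN for every pixel
          gain = np.median(gain_map)
          read_noise_e = np.median(dark_stack.std(axis=0)) * gain        # electrons RMS (approx.)
          return gain, read_noise_e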

  3. Ground-based detection of nighttime clouds above Manila Observatory (14.64°N, 121.07°E) using a digital camera.

    PubMed

    Gacal, Glenn Franco B; Antioquia, Carlo; Lagrosas, Nofel

    2016-08-01

    Ground-based cloud detection at nighttime is achieved by using cameras, lidars, and ceilometers. Despite these numerous instruments gathering cloud data, there is still an acknowledged scarcity of information on quantified local cloud cover, especially at nighttime. In this study, a digital camera is used to continuously collect images near the sky zenith at nighttime in an urban environment. An algorithm is developed to analyze pixel values of images of nighttime clouds. A minimum threshold pixel value of 17 is assigned to determine cloud occurrence. The algorithm uses temporal averaging to estimate the cloud fraction based on the results within the limited field of view. The analysis of the data from the months of January, February, and March 2015 shows that cloud occurrence is low during the months with relatively lower minimum temperature (January and February), while cloud occurrence during the warmer month (March) increases.
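
    A minimal sketch of the described thresholding scheme, assuming 8-bit grayscale frames; the threshold of 17 is taken from the abstract, while the frame size and window length are placeholders:

      import numpy as np

      THRESHOLD = 17  # minimum pixel value treated as cloud

      def cloud_fraction(gray_image):
          gray = np.asarray(gray_image)
          return np.count_nonzero(gray >= THRESHOLD) / gray.size

      def windowed_cloud_cover(images):
          """Temporal average of the per-image cloud fractions."""
          return float(np.mean([cloud_fraction(img) for img in images]))

      # Example with synthetic 8-bit frames
      rng = np.random.default_rng(0)
      frames = [rng.integers(0, 40, size=(480, 640), dtype=np.uint8) for _ in range(10)]
      print(windowed_cloud_cover(frames))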

  4. Magnetic resonance imaging, computed tomography, and 68Ga-DOTATOC positron emission tomography for imaging skull base meningiomas with infracranial extension treated with stereotactic radiotherapy - a case series

    PubMed Central

    2012-01-01

    Introduction: Magnetic resonance imaging (MRI) and computed tomography (CT) with 68Ga-DOTATOC positron emission tomography (68Ga-DOTATOC-PET) were compared retrospectively for their ability to delineate infracranial extension of skull base (SB) meningiomas treated with fractionated stereotactic radiotherapy. Methods: Fifty patients with 56 meningiomas of the SB underwent MRI, CT, and 68Ga-DOTATOC PET/CT prior to fractionated stereotactic radiotherapy. The study group consisted of 16 patients who had infracranial meningioma extension, visible on MRI ± CT (MRI/CT) or PET, and were evaluated further. The respective findings were reviewed independently, analyzed with respect to correlations, and compared with each other. Results: Within the study group, SB transgression was associated with bony changes visible by CT in 14 patients (81%). Tumorous changes of the foramen ovale and rotundum were evident in 13 and 8 cases, respectively, which were accompanied by skeletal muscular invasion in 8 lesions. We analysed six designated anatomical sites of the SB in each of the 16 patients. Of the 96 sites, 42 had infiltration that was delineable by MRI/CT and PET in 35 cases and by PET only in 7 cases. The mean infracranial volume that was delineable in PET was 10.1 ± 10.6 cm3, which was somewhat larger than the volume detectable in MRI/CT (8.4 ± 7.9 cm3). Conclusions: 68Ga-DOTATOC-PET allows detection and assessment of the extent of infracranial meningioma invasion. This method seems to be useful for planning fractionated stereotactic radiation when used in addition to conventional imaging modalities that are often inconclusive in the SB region. PMID:22217329

  5. An automated normative-based fluorodeoxyglucose positron emission tomography image-analysis procedure to aid Alzheimer disease diagnosis using statistical parametric mapping and interactive image display

    NASA Astrophysics Data System (ADS)

    Chen, Kewei; Ge, Xiaolin; Yao, Li; Bandy, Dan; Alexander, Gene E.; Prouty, Anita; Burns, Christine; Zhao, Xiaojie; Wen, Xiaotong; Korn, Ronald; Lawson, Michael; Reiman, Eric M.

    2006-03-01

    After the Centers for Medicare and Medicaid Services approved fluorodeoxyglucose positron emission tomography (FDG PET) for the diagnosis of Alzheimer's disease (AD) in some patients, they suggested the need to develop and test analysis techniques to optimize diagnostic accuracy. We developed an automated computer package comparing an individual's FDG PET image to those of a group of normal volunteers. The normal control group includes FDG-PET images from 82 cognitively normal subjects, 61.89+/-5.67 years of age, who were characterized demographically, clinically, neuropsychologically, and by their apolipoprotein E genotype (known to be associated with a differential risk for AD). In addition, AD-affected brain regions functionally defined based on a previous study (Alexander, et al., Am J Psychiatr, 2002) were also incorporated. Our computer package permits the user to optionally select control subjects matching the individual patient for gender, age, and educational level. It is fully streamlined to require minimal user intervention. With one mouse click, the program runs automatically, normalizing the individual patient image, setting up a design matrix for comparing the single subject to a group of normal controls, performing the statistics, calculating the glucose reduction overlap index of the patient with the AD-affected brain regions, and displaying the findings in reference to the AD regions. In conclusion, the package automatically contrasts a single patient with a normal subject database using sound statistical procedures. With further validation, this computer package could be a valuable tool to assist physicians in decision making and in communicating findings with patients and patient families.
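
    A schematic sketch of the single-subject-versus-normal-database comparison, assuming spatially normalized images: a voxelwise z-map against matched controls and an overlap index with a predefined AD-affected region mask. The array shapes, threshold, and function names are assumptions, not the package's actual interface:

      import numpy as np

      def z_map(patient, controls):
          """patient: (X, Y, Z) image; controls: (N, X, Y, Z) stack of normal images."""
          mu, sd = controls.mean(axis=0), controls.std(axis=0, ddof=1)
          return (patient - mu) / np.where(sd > 0, sd, np.inf)

      def hypometabolism_overlap(zmap, ad_mask, z_thresh=-1.64):
          """Fraction of the AD-affected region showing a significant glucose reduction."""
          reduced = (zmap < z_thresh) & ad_mask
          return reduced.sum() / ad_mask.sum()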

  6. Development of Open source-based automatic shooting and processing UAV imagery for Orthoimage Using Smart Camera UAV

    NASA Astrophysics Data System (ADS)

    Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.

    2016-06-01

    Recently, aerial photography with unmanned aerial vehicle (UAV) systems has used UAVs and remote control through connections to a ground control system over a radio frequency (RF) modem operating at about 430 MHz. However, this existing method of using an RF modem has limitations for long-distance communication. We therefore used the smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi to implement a UAV communication module system and carried out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone, used in areas where image capture is needed, and software for loading and managing the smart camera. The system is composed of automatic shooting using the sensor of the smart camera and shooting catalog management, which manages the captured images and their information. The UAV imagery processing module used Open Drone Map. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used were Android, OpenCV (Open Computer Vision), RTKLIB, and Open Drone Map.

  7. Neutron Imaging Camera

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley; deNolfo, G. A.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    The Neutron Imaging Camera (NIC) is based on the Three-dimensional Track Imager (3DTI) technology developed at GSFC for gamma-ray astrophysics applications. The 3DTI, a large-volume time-projection chamber, provides accurate, approximately 0.4 mm resolution, 3-D tracking of charged particles. The incident directions of fast neutrons, En > 0.5 MeV, are reconstructed from the momenta and energies of the proton and triton fragments resulting from 3He(n,p)3H interactions in the 3DTI volume. The performance of the NIC from laboratory and accelerator tests is presented.

  8. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate the validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the possibility of cameras to monitor both above- and below-canopy phenology and snow.

  9. Optimization of attenuation correction for positron emission tomography studies of thorax and pelvis using count-based transmission scans.

    PubMed

    Boellaard, R; van Lingen, A; van Balen, S C M; Lammertsma, A A

    2004-02-21

    The quality of thorax and pelvis transmission scans and therefore of attenuation correction in PET depends on patient thickness and transmission rod source strength. The purpose of the present study was to assess the feasibility of using count-based transmission scans, thereby guaranteeing more consistent image quality and more precise quantification than with fixed transmission scan duration. First, the relation between noise equivalent counts (NEC) of 10 min calibration transmission scans and rod source activity was determined over a period of 1.5 years. Second, the relation between transmission scan counts and uniform phantom diameter was studied numerically, determining the relative contribution of counts from lines of response passing through the phantom as compared with the total number of counts. Finally, the relation between patient weight and transmission scan duration was determined for 35 patients, who were scanned at the level of thorax or pelvis. After installation of new rod sources, the NEC of transmission scans first increased slightly (5%) with decreasing rod source activity and after 3 months decreased at a rate of 2-3% per month. The numerical simulation showed that the number of transmission scan counts from lines of response passing through the phantom increased with phantom diameter up to 7 cm. For phantoms larger than 7 cm, the number of these counts decreased at approximately the same rate as the total number of transmission scan counts. Patient data confirmed that the total number of transmission scan counts decreased with increasing patient weight by about 0.5% kg⁻¹. It can be concluded that count-based transmission scans compensate for radioactive decay of the rod sources. With count-based transmission scans, rod sources can be used for up to 1.5 years at the cost of a 50% increased transmission scan duration. For phantoms with diameters of more than 7 cm and for patients scanned at the level of thorax or pelvis, use of count-based

  10. NOTE: Optimization of attenuation correction for positron emission tomography studies of thorax and pelvis using count-based transmission scans

    NASA Astrophysics Data System (ADS)

    Boellaard, R.; van Lingen, A.; van Balen, S. C. M.; Lammertsma, A. A.

    2004-02-01

    The quality of thorax and pelvis transmission scans and therefore of attenuation correction in PET depends on patient thickness and transmission rod source strength. The purpose of the present study was to assess the feasibility of using count-based transmission scans, thereby guaranteeing more consistent image quality and more precise quantification than with fixed transmission scan duration. First, the relation between noise equivalent counts (NEC) of 10 min calibration transmission scans and rod source activity was determined over a period of 1.5 years. Second, the relation between transmission scan counts and uniform phantom diameter was studied numerically, determining the relative contribution of counts from lines of response passing through the phantom as compared with the total number of counts. Finally, the relation between patient weight and transmission scan duration was determined for 35 patients, who were scanned at the level of thorax or pelvis. After installation of new rod sources, the NEC of transmission scans first increased slightly (5%) with decreasing rod source activity and after 3 months decreased at a rate of 2-3% per month. The numerical simulation showed that the number of transmission scan counts from lines of response passing through the phantom increased with phantom diameter up to 7 cm. For phantoms larger than 7 cm, the number of these counts decreased at approximately the same rate as the total number of transmission scan counts. Patient data confirmed that the total number of transmission scan counts decreased with increasing patient weight by about 0.5% kg⁻¹. It can be concluded that count-based transmission scans compensate for radioactive decay of the rod sources. With count-based transmission scans, rod sources can be used for up to 1.5 years at the cost of a 50% increased transmission scan duration. For phantoms with diameters of more than 7 cm and for patients scanned at the level of thorax or pelvis, use of count-based
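
    A hedged sketch of the count-based principle: the transmission scan runs until a preset number of counts has been collected, so its duration automatically stretches as the rod sources decay. The ⁶⁸Ge half-life and the count figures below are assumptions for illustration; the abstract does not state them:

    import math

    HALF_LIFE_DAYS = 271.0     # assumed 68Ge rod-source half-life
    TARGET_COUNTS = 40e6       # hypothetical transmission count target
    FRESH_RATE_CPS = 300e3     # hypothetical count rate with fresh sources

    def required_duration_s(days_since_installation):
        # Scan until TARGET_COUNTS is reached; the count rate falls with source decay.
        rate = FRESH_RATE_CPS * 0.5 ** (days_since_installation / HALF_LIFE_DAYS)
        return TARGET_COUNTS / rate

    print(required_duration_s(0), required_duration_s(550))  # fresh sources vs ~1.5 years

    Note that this decay-only model is pessimistic: the study finds only a ~50% increase in scan duration after 1.5 years, because the noise equivalent counts initially improve as dead-time losses fall with decreasing activity.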

  11. Cosmic Ray Positrons from Pulsars

    NASA Technical Reports Server (NTRS)

    Harding, Alice K.

    2010-01-01

    Pulsars are potential Galactic sources of positrons through pair cascades in their magnetospheres. There are, however, many uncertainties in establishing their contribution to the local primary positron flux. Among these are the local density of pulsars, the cascade pair multiplicities that determine the injection rate of positrons from the pulsar, the acceleration of the injected particles by the pulsar wind termination shock, their rate of escape from the pulsar wind nebula, and their propagation through the interstellar medium. I will discuss these issues in the context of what we are learning from the new Fermi pulsar detections and discoveries.

  12. Imaging Prostate Cancer with Positron Emission Tomography

    DTIC Science & Technology

    2014-07-01

    Award Number: W81XWH-13-1-0125. Title: Imaging Prostate Cancer with Positron Emission Tomography. Report type: Annual; dates covered: 01 Sept 2013-31 Aug 2014. The proposal is to develop peptide-based radiopharmaceuticals and evaluate them as PET imaging agents in preclinical animal models of prostate cancer.

  13. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied by a pressure-sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
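
    A small geometric illustration (not taken from the patent) of why the lateral shift converges the views: with each lens fixed and its camera displaced outward by a small offset, the effective boresight toes inward by roughly atan(offset / focal length), so the two views cross at a finite distance. All numbers below are invented:

    import math

    def convergence_distance(lens_separation_m, focal_length_m, offset_m):
        # Distance at which the two toed-in boresights intersect (symmetric offsets).
        theta = math.atan2(offset_m, focal_length_m)  # inward tilt per camera
        return lens_separation_m / (2.0 * math.tan(theta))

    # e.g. lenses 60 mm apart, 12 mm focal length, each camera shifted 0.5 mm outward:
    print(convergence_distance(0.060, 0.012, 0.0005))  # ~0.72 m viewing distance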

  14. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  15. Vacuum Camera Cooler

    NASA Technical Reports Server (NTRS)

    Laugen, Geoffrey A.

    2011-01-01

    Acquiring cheap, moving video was impossible in a vacuum environment, due to camera overheating. This overheating is brought on by the lack of cooling media in vacuum. A water-jacketed camera cooler enclosure machined and assembled from copper plate and tube has been developed. The camera cooler (see figure) is cup-shaped and cooled by circulating water or nitrogen gas through copper tubing. The camera, a store-bought "spy type," is not designed to work in a vacuum. With some modifications the unit can be thermally connected when mounted in the cup portion of the camera cooler. The thermal conductivity is provided by copper tape between parts of the camera and the cooled enclosure. During initial testing of the demonstration unit, the camera cooler kept the CPU (central processing unit) of this video camera at operating temperature. This development allowed video recording of an in-progress test, within a vacuum environment.

  16. A Spartan3E-based low-cost system for gamma-ray detection in small single photon emission computed tomography or positron emission tomography systems

    NASA Astrophysics Data System (ADS)

    Fysikopoulos, E.; Georgiou, M.; Efthimiou, N.; David, S.; Loudos, G.; Matsopoulos, G.

    2011-11-01

    The development and assessment of a readout system based on field programmable gate arrays (FPGAs) for dedicated nuclear medicine cameras is presented. We have used the Xilinx Spartan3E starter kit, which is one of the simplest FPGA evaluation boards. The aim of this work is to offer a simple, open source data acquisition tool that provides accurate results for nuclear imaging applications. The system has been evaluated using three different experimental setups: pulses from two position-sensitive photomultipliers (PSPMTs) and a silicon photomultiplier (SiPM) were recorded using 99mTc sources. Two dual-channel, external, 12-bit analog-to-digital converters with a sampling rate of 1 Msps per channel were used. The tool was designed using Xilinx's embedded development kit and was based on Xilinx's MicroBlaze soft-core processor. A reference multiparameter-based data acquisition system using nuclear instrumentation modules was used for the evaluation of the proposed system. A number of tests were carried out to assess different algorithms for pulse maximum estimation, and Gaussian fitting provided optimal results. The results have shown that the FPGA data acquisition system (i) provides accurate digitization of the PSPMT anode signals under various conditions and (ii) gives similar energy spectra when SiPMs are used.
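
    A minimal offline sketch (not the authors' FPGA firmware) of the pulse-maximum estimation that the abstract reports Gaussian fitting handled best: fit a Gaussian to the digitized samples of one pulse and take the fitted amplitude as the maximum. The sample values below are invented:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(t, amplitude, t0, sigma):
        return amplitude * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

    samples = np.array([40, 180, 610, 990, 870, 430, 150, 50], dtype=float)  # ADC codes
    t = np.arange(len(samples), dtype=float)                                 # sample index

    popt, _ = curve_fit(gaussian, t, samples,
                        p0=[samples.max(), float(samples.argmax()), 1.5])
    amplitude, t0, sigma = popt
    print(f"fitted pulse maximum ~ {amplitude:.0f} ADC codes at sample {t0:.2f}")

    Histogramming the fitted amplitudes over many events gives the energy spectrum used to compare the FPGA system against the reference acquisition.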

  17. Improved generalized gradient approximation for positron states in solids

    NASA Astrophysics Data System (ADS)

    Kuriplach, Jan; Barbiellini, Bernardo

    2014-04-01

    Several first-principles calculations of positron-annihilation characteristics in solids have added gradient corrections to the local-density approximation within the theory by Arponen and Pajanne [Ann. Phys. (NY) 121, 343 (1979), 10.1016/0003-4916(79)90101-5] since this theory systematically overestimates the annihilation rates. As a further remedy, we propose to use gradient corrections for other local-density approximation schemes based on perturbed hypernetted chain and on quantum Monte Carlo results. Our calculations for various metals and semiconductors show that the proposed schemes generally improve the positron lifetimes when they are confronted with experiment. We also compare the resulting positron affinities in solids with data from slow-positron measurements.
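
    For orientation, a hedged sketch of the general form such gradient-corrected schemes take (following the original positron GGA; not necessarily the exact expressions of this paper): the local annihilation rate is damped by a factor that depends on the scaled electron-density gradient,

    \epsilon = \frac{|\nabla n_-|^{2}}{n_-^{2}\, q_{\mathrm{TF}}^{2}}, \qquad
    \lambda_{\mathrm{GGA}} = \lambda_{\mathrm{LDA}}\, e^{-\alpha\,\epsilon}, \qquad
    \tau = \frac{1}{\lambda_{\mathrm{GGA}}},

    where n_- is the electron density, q_TF the Thomas-Fermi screening wave vector, and alpha an empirical parameter (about 0.22 in the original scheme); the positron lifetime tau is the inverse of the total annihilation rate.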

  18. Those Nifty Digital Cameras!