Science.gov

Sample records for adaptive coded aperture

  1. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), a wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments of the past five years, then expand on previously published work to show how replacing conventional imaging optics with coded apertures can reduce system size and weight. We also present a trade-space analysis of key coded-aperture design parameters and review potential applications as replacements for traditional imaging optics. Results are presented from last year's investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally, we discuss the potential application of coded apertures for replacing the objective lenses of night vision goggles (NVGs).

  2. A Programmable Liquid Collimator for Both Coded Aperture Adaptive Imaging and Multiplexed Compton Scatter Tomography

    DTIC Science & Technology

    2012-03-01

Assessment of COMSCAN, a Compton Backscatter Imaging Camera, for the One-Sided Non-Destructive Inspection of Aerospace Components. Technical report...A PROGRAMMABLE LIQUID COLLIMATOR FOR BOTH CODED APERTURE ADAPTIVE IMAGING AND MULTIPLEXED COMPTON SCATTER TOMOGRAPHY THESIS Jack G. M. FitzGerald, 2d...LIQUID COLLIMATOR FOR BOTH CODED APERTURE ADAPTIVE IMAGING AND MULTIPLEXED COMPTON SCATTER TOMOGRAPHY THESIS Presented to the Faculty Department of

  3. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, while compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements in image reconstruction of up to 10 dB over uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
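The snapshot-to-snapshot adaptation described above can be illustrated with a minimal sketch. The update rule below (function name, attenuation factor, and number of levels are illustrative assumptions, not the authors' actual filter) darkens aperture entries whose previous measurement clipped, then quantizes the entries to a few grayscale levels:

```python
import numpy as np

def update_grayscale_aperture(aperture, prev_measurement, sat_level,
                              attenuation=0.5, n_levels=5):
    """Attenuate the transmittance of aperture entries whose previous
    snapshot saturated, then quantize to a limited set of grayscale levels."""
    ap = aperture.copy()
    ap[prev_measurement >= sat_level] *= attenuation
    # quantize to n_levels uniformly spaced transmittance values in [0, 1]
    return np.round(ap * (n_levels - 1)) / (n_levels - 1)

rng = np.random.default_rng(0)
aperture = np.ones((8, 8))                  # start fully transparent
scene = rng.uniform(0, 2000, (8, 8))        # bright scene
sat = 1000.0                                # sensor full-well / clip level
snap1 = np.minimum(aperture * scene, sat)   # first snapshot clips at sat
aperture = update_grayscale_aperture(aperture, snap1, sat)
snap2 = np.minimum(aperture * scene, sat)   # second snapshot, adapted code
print((snap1 >= sat).sum(), "->", (snap2 >= sat).sum())
```

In this toy run the second snapshot saturates fewer pixels, which is the effect the adaptive grayscale design aims for.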

  4. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical design, meeting combined high requirements on field of view, optical speed, environmental adaptation and imaging quality can be achieved only by introducing increasingly complex aberration correctors. In recent years of computational imaging, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, have proven particularly suitable for military infrared imaging systems. The merits of this new concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example application to conformal imaging system design, in which the elements of a set of binary coded aperture masks are optimized, is presented in this paper; simulation results show that the optical performance is closely related to the mask design and the optimization of the reconstruction algorithm. As a dynamic aberration corrector in the conformal optical system, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for recovery of a sharp image using digital image restoration.

  5. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and reconstructing the shadow image into a 3-dimensional image of every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.

  6. Memristor fabrication and characterization: an adaptive coded aperture imaging and sensing opportunity

    NASA Astrophysics Data System (ADS)

    Yakopcic, Chris; Taha, Tarek M.; Shin, Eunsung; Subramanyam, Guru; Murray, P. Terrence; Rogers, Stanley

    2010-08-01

The memristor, experimentally verified for the first time in 2008, is one of four fundamental passive circuit elements (the others being resistors, capacitors, and inductors). Development and characterization of memristor devices and the design of novel computing architectures based on these devices can potentially provide significant advances in intelligence processing systems for a variety of applications including image processing, robotics, and machine learning. In particular, adaptive coded aperture (diffraction) sensing, an emerging technology enabling real-time, wide-area IR/visible sensing and imaging, could benefit from new high-performance biologically inspired image processing architectures based on memristors. In this paper, we present results from the fabrication and characterization of memristor devices utilizing titanium oxide dielectric layers in a parallel plate configuration. Two versions of memristor devices have been fabricated at the University of Dayton and the Air Force Research Laboratory utilizing varying thicknesses of the TiO2 dielectric layers. Our results show that the devices do exhibit the characteristic hysteresis loop in their I-V plots.

  7. Utilizing Microelectromechanical Systems (MEMS) Micro-Shutter Designs for Adaptive Coded Aperture Imaging (ACAI) Technologies

    DTIC Science & Technology

    2009-03-01

and Fraunhofer, or far-field, diffraction. For Fraunhofer diffraction to exist, the following relationship must be satisfied at the observation...316 μm from the aperture, Fraunhofer diffraction will be observed, meaning that when a light wave passes through one aperture to a designated pixel...diffraction could then become a limiting factor. The Huygens-Fresnel Principle states simply that if the wavelength is large compared to the aperture, the

  8. Adaptive aperture synthesis

    NASA Astrophysics Data System (ADS)

    Johnson, A. M.; Zhang, S.; Mudassar, A.; Love, G. D.; Greenaway, A. H.

    2005-12-01

    High-resolution imaging can be achieved by optical aperture synthesis (OAS). Such an imaging process is subject to aberrations introduced by instrumental defects and/or turbulent media. Redundant spacings calibration (RSC) is a snapshot calibration technique that can be used to calibrate OAS arrays without use of assumptions about the object being imaged. Here we investigate the analogies between RSC and adaptive optics in passive imaging applications.

  9. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. More recent applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene, requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a movable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100 kHz), allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large-area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
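The tuned-cavity modulation principle can be sketched with the ideal Airy transmission of a lossless, symmetric Fabry-Perot etalon (a simplification of the asymmetric cavities described above; the mirror reflectivity and gap values are illustrative assumptions):

```python
import numpy as np

def fp_transmission(wavelength, gap, R):
    """Ideal lossless Fabry-Perot etalon transmission (Airy function)
    for mirror reflectivity R and optical gap, at normal incidence."""
    delta = 4.0 * np.pi * gap / wavelength      # round-trip phase
    F = 4.0 * R / (1.0 - R) ** 2                # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2.0) ** 2)

lam = 4.0e-6                          # mid-IR wavelength, 4 um
R = 0.8                               # illustrative mirror reflectivity
open_gap = lam / 2.0                  # resonant gap: transmission near 1
closed_gap = lam / 2.0 + lam / 4.0    # anti-resonant gap: minimum transmission
print(fp_transmission(lam, open_gap, R), fp_transmission(lam, closed_gap, R))
```

Electrostatically moving the mirror by a quarter wavelength switches a pixel between the transmissive and blocking states, which is how an addressable array of such cavities forms a reconfigurable mask.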

  10. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns, and it is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture the size of the basic aperture pattern contains all the information necessary to image the object with no artifacts.
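The key claim, that any basic-pattern-sized section of the mosaic is a cyclic version of the basic pattern, is easy to check numerically. The sketch below uses a 1-D quadratic-residue pattern as a stand-in for the 2-D arrays of the patent:

```python
import numpy as np

p = 11                                           # prime pattern length
qr = {(i * i) % p for i in range(1, p)}          # quadratic residues mod p
basic = np.array([1 if i in qr else 0 for i in range(p)])
mosaic = np.tile(basic, 2)                       # 1-D mosaic of the basic pattern

# every window of basic-pattern size cut from the mosaic is a cyclic
# shift of the basic pattern, so it carries complete coding information
for s in range(p):
    window = mosaic[s:s + p]
    assert np.array_equal(window, np.roll(basic, -s))
print("every window is a cyclic shift of the basic pattern")
```

This is why an off-axis source, whose shadow falls on a shifted section of the detector, still contributes a complete (cyclically permuted) copy of the pattern.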

  11. Sparse-aperture adaptive optics

    NASA Astrophysics Data System (ADS)

    Tuthill, Peter; Lloyd, James; Ireland, Michael; Martinache, Frantz; Monnier, John; Woodruff, Henry; ten Brummelaar, Theo; Turner, Nils; Townes, Charles

    2006-06-01

    Aperture masking interferometry and Adaptive Optics (AO) are two of the competing technologies attempting to recover diffraction-limited performance from ground-based telescopes. However, there are good arguments that these techniques should be viewed as complementary, not competitive. Masking has been shown to deliver superior PSF calibration, rejection of atmospheric noise and robust recovery of phase information through the use of closure phases. However, this comes at the penalty of loss of flux at the mask, restricting the technique to bright targets. Adaptive optics, on the other hand, can reach a fainter class of objects but suffers from the difficulty of calibration of the PSF which can vary with observational parameters such as seeing, airmass and source brightness. Here we present results from a fusion of these two techniques: placing an aperture mask downstream of an AO system. The precision characterization of the PSF enabled by sparse-aperture interferometry can now be applied to deconvolution of AO images, recovering structure from the traditionally-difficult regime within the core of the AO-corrected transfer function. Results of this program from the Palomar and Keck adaptive optical systems are presented.

  12. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
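The URA-plus-correlation idea can be sketched in one dimension. A quadratic-residue pattern of prime length p ≡ 3 (mod 4) has the two-valued cyclic autocorrelation the method requires, so balanced correlation yields a single peak on a perfectly flat sidelobe floor (a 1-D stand-in for the 2-D URAs of the paper):

```python
import numpy as np

p = 11                                           # prime, p % 4 == 3
qr = {(i * i) % p for i in range(1, p)}          # quadratic residues mod p
a = np.array([1 if i in qr else 0 for i in range(p)])   # aperture (1 = open)
g = 2 * a - 1                                    # balanced decoding array

# a point source at position s casts a cyclically shifted aperture shadow
s = 4
shadow = np.roll(a, s)

# periodic cross-correlation of the shadow with the decoder
decoded = np.array([np.sum(shadow * np.roll(g, t)) for t in range(p)])
print(decoded)   # peak of (p-1)/2 at t = s, constant -1 elsewhere
```

The flat sidelobes (a constant, removable offset rather than object-dependent artifacts) are exactly the property that distinguishes URAs from generic random apertures.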

  13. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  14. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  15. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring of the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fastneutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  16. Rank minimization code aperture design for spectrally selective compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2013-03-01

A new code aperture design framework for multiframe code aperture snapshot spectral imaging (CASSI) systems is presented. It aims at the optimization of code aperture sets such that a group of compressive spectral measurements is constructed, each with information from a specific subset of bands. A matrix representation of CASSI is introduced that permits the optimization of spectrally selective code aperture sets. Furthermore, each code aperture set forms a matrix such that rank minimization is used to reduce the number of CASSI shots needed. Conditions for the code apertures are identified such that a restricted isometry property in the CASSI compressive measurements is satisfied with higher probability. Simulations show higher quality of spectral image reconstruction than that attained by systems using Hadamard or random code aperture sets.
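The matrix representation mentioned above can be sketched for a toy system. The code below is a simplification assuming a single shot and one pixel of dispersion per band (not the paper's exact model): the coded aperture modulates each spectral band, the prism shears band l by l detector columns, and the sheared bands sum on the detector, so one snapshot is y = Hf for the vectorized datacube f:

```python
import numpy as np

def cassi_matrix(code, n_bands):
    """Sensing matrix of one simplified CASSI shot: the coded aperture
    modulates each band, then band l is sheared by l pixels along columns."""
    rows, cols = code.shape
    det_cols = cols + n_bands - 1            # detector widened by the shear
    H = np.zeros((rows * det_cols, rows * cols * n_bands))
    for l in range(n_bands):
        for r in range(rows):
            for c in range(cols):
                voxel = (l * rows + r) * cols + c        # datacube index
                det = r * det_cols + (c + l)             # detector pixel index
                H[det, voxel] = code[r, c]
    return H

rng = np.random.default_rng(1)
code = rng.integers(0, 2, (4, 4)).astype(float)   # block-unblock aperture
H = cassi_matrix(code, n_bands=3)
f = rng.uniform(size=4 * 4 * 3)                   # toy datacube, vectorized
y = H @ f                                         # one snapshot measurement
print(H.shape, y.shape)
```

Stacking one such H per shot (each with its own code) gives the multiframe system whose structure the paper's rank-minimization design optimizes.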

  17. Multi-shot compressed coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Du, Juan; Wu, Tengfei; Jin, Zhenhua

    2013-09-01

The classical methods of compressed coded aperture (CCA) imaging still require an optical sensor with high resolution, even though the sampling rate has already broken the Nyquist limit. A novel architecture for multi-shot compressed coded aperture imaging (MCCAI) using a low-resolution optical sensor is proposed. It is based on a 4-f imaging system combined with two spatial light modulators (SLMs) to achieve the compressive imaging goal. The first SLM, employed for random convolution, is placed at the frequency spectrum plane of the 4-f imaging system, while the second SLM, which works as a selecting filter, is positioned in front of the optical sensor. By altering the random coded pattern of the second SLM and sampling, a set of observations can be obtained easily with a low-resolution optical sensor; these observations are combined mathematically and used to reconstruct the high-resolution image. That is to say, MCCAI aims at super-resolution imaging from multiple random samplings acquired with a low-resolution optical sensor. To improve the computational imaging performance, total variation (TV) regularization is introduced into the super-resolution reconstruction model to suppress artifacts, and the alternating direction method of multipliers (ADM) is utilized to solve for the optimal result efficiently. The results show that the MCCAI architecture is suitable for super-resolution computational imaging using a much lower-resolution optical sensor than traditional CCA imaging methods require, by capturing multiple frame images.

  18. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as though they were obtained through the anti-mask.
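The mask/anti-mask discrimination can be sketched in one dimension: decoding the same data under both hypotheses, the correct side produces the strong positive peak (the quadratic-residue aperture and source position here are illustrative, not the patent's mask):

```python
import numpy as np

p = 11
qr = {(i * i) % p for i in range(1, p)}
mask = np.array([1 if i in qr else 0 for i in range(p)])   # one side
anti = 1 - mask                                            # other side

def decode(data, aperture):
    """Balanced periodic correlation of detector data with an aperture."""
    g = 2 * aperture - 1
    return np.array([np.sum(data * np.roll(g, t)) for t in range(p)])

# a source on the mask side casts a shifted shadow of the mask
data = np.roll(mask, 3)
img_mask = decode(data, mask)   # hypothesis: seen through the mask
img_anti = decode(data, anti)   # hypothesis: seen through the anti-mask
print(img_mask.max(), img_anti.max())
```

The mask hypothesis yields a sharp positive peak at the source position, while the anti-mask hypothesis yields an inverted (negative) peak, so comparing the two reconstructions identifies the source side.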

  19. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

We employ a coded aperture pattern in front of a pixelated charge-coupled device detector to image fluorescent x-rays (6-25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large numerical aperture x-ray imaging system. An algorithm to develop and fabricate the free-standing No-Two-Holes-Touching aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray-tracing technique and confirmed by experiments on standard samples.

  20. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows for imaging of fluorescent x-rays (6-25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical aperture x-ray imaging system. The algorithm to develop the self-supported coded aperture pattern of the No-Two-Holes-Touching (NTHT) type was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the exciting energy, the different metals were speciated.

  1. Large aperture adaptive optics for intense lasers

    NASA Astrophysics Data System (ADS)

    Deneuville, François; Ropert, Laurent; Sauvageot, Paul; Theis, Sébastien

    2015-05-01

ISP SYSTEM has developed a range of large-aperture electro-mechanical deformable mirrors (DM) suitable for ultra-short-pulse intense lasers. The design of the MD-AME deformable mirror is based on force application at numerous locations by electromechanical actuators driven by stepper motors. The DM design and assembly method were adapted to large-aperture beams, and performance was evaluated in a first application for a beam with a diameter of 250 mm at a 45° angle of incidence. A Strehl ratio above 0.9 was reached for this application. Simulations were correlated with measurements on an optical bench, and the design has been validated by calculation for very large apertures (up to Ø550 mm). Optical aberrations up to Zernike order 5 can be corrected with a very low residual error, as for the actual MD-AME mirror. Amplitude can reach several hundred μm for low-order corrections. Hysteresis is lower than 0.1% and linearity better than 99%. Contrary to piezo-electric actuators, the μ-AME actuators avoid print-through effects and keep the mirror shape stable even when unpowered, providing high resistance to electromagnetic pulses. The MD-AME mirrors can be adapted to circular, square or elliptical beams, and they are compatible with all dielectric or metallic coatings.

  2. Development of large aperture composite adaptive optics

    NASA Astrophysics Data System (ADS)

    Kmetik, Viliam; Vitovec, Bohumil; Jiran, Lukas; Nemcova, Sarka; Zicha, Josef; Inneman, Adolf; Mikulickova, Lenka; Pavlica, Richard

    2015-01-01

Large-aperture composite adaptive optics for laser applications is investigated in a cooperation of the Institute of Plasma Physics, the Department of Instrumentation and Control Engineering FME CTU, and 5M Ltd. We are exploring the opportunity of producing a large-size high-power-laser deformable mirror using a lightweight bimorph-actuated structure with a composite core. In order to produce a sufficiently large operational free aperture, we are developing new technologies for production of the flexible core, the bimorph actuator and the deformable mirror reflector. A full simulation of the deformable-mirror structure was prepared and validated by complex testing. The deformable mirror actuation and the response of the complicated structure are investigated for accurate control of the adaptive optics. An original adaptive optics control system and a bimorph deformable mirror driver were developed. Tests of material samples, components and sub-assemblies were completed. A subscale 120 mm bimorph deformable mirror prototype was designed, fabricated and thoroughly tested. A large-size 300 mm composite-core bimorph deformable mirror was simulated and optimized; fabrication of a prototype is under way. A measurement and testing facility is being modified to accommodate large-size optics.

  3. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure function is extremely close to the theoretical one. Secondly, it can simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
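The FFT phase-screen generation step can be sketched as follows. The normalization follows one common convention and is an assumption, and the Karhunen-Loeve correction of the missing low-order frequencies mentioned above is not included:

```python
import numpy as np

def kolmogorov_phase_screen(n, delta, r0, seed=0):
    """FFT-based Kolmogorov phase screen (radians) on an n x n grid with
    pixel size delta [m] and Fried parameter r0 [m]. Spatial frequencies
    below 1/(n*delta) are not represented by the FFT grid; the code
    described above restores them with a Karhunen-Loeve expansion."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=delta)           # frequency axis [1/m]
    FX, FY = np.meshgrid(f, f)
    f2 = FX**2 + FY**2
    f2[0, 0] = np.inf                        # suppress the undefined DC term
    # Kolmogorov phase PSD: 0.023 r0^(-5/3) f^(-11/3)
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)
    df = 1.0 / (n * delta)
    cn = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    # each Fourier mode gets amplitude sqrt(PSD * df^2) and random phase
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * n * n
    return np.real(screen)

phz = kolmogorov_phase_screen(256, 0.01, r0=0.1)
print(phz.shape, phz.std())
```

Distributed turbulence is then modeled by generating one such screen per layer and propagating through them in sequence.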

  4. Aperture codes for sensors viewing extended objects from space

    NASA Technical Reports Server (NTRS)

    Curtis, C. C.

    1992-01-01

The paper describes aperture codes for sensors viewing extended objects from space, which find application in imaging an extended object that may have relatively low contrast and whose lateral limits extend beyond the FOV of the sensor. Elements of an extended object lying near the FOV limits are only partially coded, i.e., flux from those elements cannot cast a shadow of the entire aperture code onto the detector, as elements near the center of the FOV can. This has consequences for the algorithms used to reconstruct the image. The object field is divided into a number of elements that is smaller than the number of detector pixels, and a least-squares fit to the data is performed. The methods used for choosing the matrices representing the aperture codes are discussed, and computer simulations of the effects of noise are described.
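The least-squares reconstruction with partially coded edge elements can be sketched in one dimension: object elements near the field edge cast only part of the code onto the detector, giving truncated columns of the system matrix, and the overdetermined system is solved by least squares (the code pattern and geometry are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(2)
code = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0], float)
n_det, n_obj = 20, 12        # fewer object elements than detector pixels

# Each object element casts the aperture pattern at a different offset.
# Elements near the edge of the field cast only the part of the pattern
# that lands on the detector, so their columns of A are truncated.
A = np.zeros((n_det, n_obj))
for j in range(n_obj):
    for i, c in enumerate(code):
        if j + i < n_det:
            A[j + i, j] = c

x_true = rng.uniform(0, 1, n_obj)    # extended, low-contrast object
y = A @ x_true                        # detector data (noise could be added)
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.abs(x_hat - x_true).max())
```

Because the system matrix has full column rank even with the truncated edge columns, the least-squares fit recovers the noiseless object exactly; with noise added, the same fit gives the minimum-residual estimate the paper's simulations study.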

  5. X-ray scatter tomography using coded apertures

    NASA Astrophysics Data System (ADS)

    MacCabe, Kenneth P.

This work proposes and studies a new field of x-ray tomography which combines the principles of scatter imaging and coded apertures, termed "coded aperture x-ray scatter imaging" (CAXSI). Conventional x-ray tomography reconstructs an object's electron density distribution by measuring a set of line integrals known as the x-ray transform, based physically on the attenuation of incident rays. More recently, scatter imaging has emerged as an alternative to attenuation imaging by measuring radiation from coherent and incoherent scattering. The information-rich scatter signal may be used to infer density as well as molecular structure throughout a volume. Some scatter modalities use collimators at the source and detector, resulting in long scan times due to the low efficiency of scattering mechanisms combined with a high degree of spatial filtering. CAXSI comes to the rescue by employing coded apertures. Coded apertures transmit a larger fraction of the scattered rays than collimators while also imposing structure on the scatter signal. In a coded aperture system each detector is sensitive to multiple ray paths, producing multiplexed measurements. The coding problem is then to design an aperture which enables de-multiplexing to reconstruct the desired physical properties and spatial distribution of the target. In this work, a number of CAXSI systems are proposed, analyzed, and demonstrated. One-dimensional "pencil" beams, two-dimensional "fan" beams, and three-dimensional "cone" beams are considered for the illumination. Pencil beam and fan beam CAXSI systems are demonstrated experimentally. The utility of energy-integrating (scintillation) detectors and of energy-sensitive (photon-counting) detectors is evaluated theoretically, and new coded aperture designs are presented for each beam geometry. Physical models are developed for each coded aperture system, from which resolution metrics are derived. Systems employing different combinations of beam geometry, coded

  6. Comparative noise performance of a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Piper, Jonathan; Yuen, Peter; Godfree, Peter; Ding, Mengjia; Soori, Umair; Selvagumar, Senthurran; James, David

    2016-10-01

    Novel types of spectral sensors using coded apertures may offer various advantages over conventional designs, especially the possibility of compressive measurements that could exceed the expected spatial, temporal or spectral resolution of the system. However, the nature of the measurement process imposes certain limitations, especially on the noise performance of the sensor. This paper considers a particular type of coded-aperture spectral imager and uses analytical and numerical modelling to compare its expected noise performance with conventional hyperspectral sensors. It is shown that conventional sensors may have an advantage in conditions where signal levels are high, such as bright light or slow scanning, but that coded-aperture sensors may be advantageous in low-signal conditions.
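    The trade-off in the final sentence can be checked numerically. Below is a toy model of our own (invented fluxes, noise levels, and a 7-channel cyclic 0/1 code; not the paper's sensor model): multiplexing wins when additive read noise dominates, while direct measurement wins for a bright, shot-noise-limited source.

```python
import numpy as np

rng = np.random.default_rng(2)

# Cyclic 0/1 code of order 7 built from quadratic residues; each row
# selects which spectral channels reach the detector together.
p = 7
qr = {(i * i) % p for i in range(1, p)}
row = np.array([1.0 if i in qr else 0.0 for i in range(p)])
S = np.array([np.roll(row, k) for k in range(p)])
Sinv = np.linalg.inv(S)

s = np.full(p, 100.0)                  # hypothetical channel fluxes (photons)
trials = 5000

# Regime 1: additive read noise dominates -> multiplexing wins.
sigma = 5.0
y_direct = s + rng.normal(scale=sigma, size=(trials, p))
y_mux = S @ s + rng.normal(scale=sigma, size=(trials, p))
s_hat = y_mux @ Sinv.T                 # de-multiplex each trial
mse_direct_read = ((y_direct - s) ** 2).mean()
mse_mux_read = ((s_hat - s) ** 2).mean()

# Regime 2: Poisson shot noise on a bright source -> direct wins.
yd = rng.poisson(s, size=(trials, p)).astype(float)
ym = rng.poisson(S @ s, size=(trials, p)).astype(float)
sd_hat = ym @ Sinv.T
mse_direct_shot = ((yd - s) ** 2).mean()
mse_mux_shot = ((sd_hat - s) ** 2).mean()
```

In this simplified setting the multiplexed estimator roughly halves the read-noise variance but inflates the shot-noise variance, mirroring the paper's conclusion about bright versus low-signal conditions.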

  7. Design of wavefront coding optical system with annular aperture

    NASA Astrophysics Data System (ADS)

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-10-01

    Wavefront coding can extend the depth of field of a traditional optical system by inserting a phase mask into the pupil plane. In this paper, the point spread function (PSF) of a wavefront coding system with an annular aperture is analyzed. The stationary phase method and the fast Fourier transform (FFT) method are each used to compute the diffraction integral. The OTF invariance is analyzed for the annular aperture with a cubic phase mask under different obscuration ratios. With these analysis results, a wavefront coding system using a Maksutov-Cassegrain configuration is then designed. It is an F/8.21 catadioptric system with an annular aperture, and its focal length is 821 mm. The strength of the cubic phase mask is optimized with a user-defined operand in Zemax. The Wiener filtering algorithm is used to restore the images, and numerical simulation proves the validity of the design.
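    The restoration step can be sketched with a one-dimensional Wiener filter. This is our illustration only: the cubic-phase PSF is replaced by a simple exponential blur, and the noise-to-signal ratio is an assumed constant.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256

# Toy scene: two point sources of different strengths.
x = np.zeros(n)
x[100], x[140] = 1.0, 0.5

# Stand-in blur kernel for the coded PSF (normalized exponential tail).
psf = np.exp(-np.arange(n) / 8.0)
psf /= psf.sum()

H = np.fft.fft(psf)
blurred = np.fft.ifft(np.fft.fft(x) * H).real
noisy = blurred + rng.normal(scale=1e-3, size=n)

# Wiener filter with an assumed constant noise-to-signal ratio.
nsr = 1e-4
W = np.conj(H) / (np.abs(H) ** 2 + nsr)
restored = np.fft.ifft(np.fft.fft(noisy) * W).real
```

The `nsr` constant plays the role of the regularizer that keeps the filter from amplifying noise where the coded PSF transfers little signal.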

  8. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application to adaptive imaging, along with reconstruction techniques that use an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code. PMID:26190884

  9. Adaptive SPECT imaging with crossed-slit apertures

    NASA Astrophysics Data System (ADS)

    Durko, Heather L.; Furenlid, Lars R.

    2014-09-01

    Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application to adaptive imaging, along with reconstruction techniques that use an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultra-high-resolution reconstruction code.

  10. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and in a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function (PSF) are obtained. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  11. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded apertures enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be substantially reclaimed by our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions about masking and its impact on measured photon distributions.
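    The two evidence-fusion alternatives can be illustrated abstractly. Below is a minimal sketch of our own (all scores invented, and the summation/maximum rules are generic stand-ins for the paper's two fusion methods).

```python
import numpy as np

# Hypothetical per-observation log-likelihood ratios for "source present";
# positive values favour the source hypothesis (all numbers invented).
scores = np.array([0.4, -0.2, 1.1, 0.3, -0.9])

fused_sum = scores.sum()   # product-of-likelihoods rule (independent observations)
fused_max = scores.max()   # strongest-single-observation rule

# Declare a detection when the pooled evidence favours the source.
detected = fused_sum > 0.0
```

Summation pools weak, consistent evidence across a search pass; the maximum rule is sensitive to a single strong observation but ignores corroboration.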

  12. Coded-Aperture Transaxial Tomography Using Modular Gamma Cameras

    NASA Astrophysics Data System (ADS)

    Roney, Timothy Joseph

    Imaging in nuclear medicine involves the injection of a radioactive tracer into the body and subsequent detection of the radiation emanating from an organ of interest. Single-photon emission computed tomography (SPECT) is the branch of nuclear medicine that yields three-dimensional maps of the distribution of a tracer, most commonly as a series of two-dimensional slices. One major drawback to transaxial tomographic imaging in SPECT today is the rotation required of a gamma camera to collect the tomographic data set. Transaxial SPECT usually involves a large, single-crystal scintillation camera and an aperture (collimator) that together satisfy only a small portion of the spatial sampling requirements simultaneously. It would be very desirable to have a stationary data-collection apparatus that allows all spatial sampling in the data set to occur simultaneously. Aperture or detector motion (or both) is merely an inconvenience in most imaging situations where the patient is stationary. However, such motion enormously complicates the prospect of tomographically recording dynamic events, such as the beating heart, with radioactive pharmaceuticals. By substituting a set of small modular detectors for the large single-crystal detector, we can arrange the usable detector area in such a way as to collect all spatial samples simultaneously. The modular detectors allow for the possibility of using other types of stationary apertures. We demonstrate the capabilities of one such aperture, the pinhole array. The pinhole array is one of many kinds of collimators known as coded apertures. Coded apertures differ from conventional apertures in nuclear medicine in that they allow for overlapping projections of the object on the detector. Although overlapping projections are not a requirement when using pinhole arrays, there are potential benefits in terms of collection efficiency. There are also potential drawbacks in terms of the position uncertainty of

  13. Depth estimation from multiple coded apertures for 3D interaction

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Choi, Changkyu; Park, Dusik

    2013-09-01

    In this paper, we propose a novel depth estimation method from multiple coded apertures for 3D interaction. A flat panel display is transformed into lens-less multi-view cameras consisting of multiple coded apertures. The sensor panel behind the display captures the scene in front of the display through the imaging pattern of modified uniformly redundant arrays (MURA) on the display panel. To estimate the depth of an object in the scene, we first generate a stack of synthetically refocused images at various distances by applying a shift-and-average approach to the captured coded images. Then, an initial depth map is obtained by applying a focus operator to the stack of refocused images at each pixel. Finally, the depth is refined by fitting a parametric focus model to the response curves near the initial depth estimates. To demonstrate the effectiveness of the proposed algorithm, we construct an imaging system to capture the scene in front of the display. The system consists of a display screen and an x-ray detector without a scintillator layer, so that it acts as a visible-light sensor panel. Experimental results confirm that the proposed method accurately determines the depth of an object, including a human hand, in front of the display by capturing multiple MURA-coded images, generating refocused images at different depth levels, and refining the initial depth estimates.
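    The MURA pattern underlying such display masks can be generated in a few lines. A minimal sketch (standard quadratic-residue construction for prime order p; 1 = transmissive, 0 = opaque; not the authors' code):

```python
import numpy as np

def mura(p):
    """p x p modified uniformly redundant array, p prime; 1 = open."""
    qr = {(i * i) % p for i in range(1, p)}        # quadratic residues mod p
    c = [1 if i in qr else -1 for i in range(p)]
    a = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i, j] = 0                         # first row closed
            elif j == 0:
                a[i, j] = 1                         # first column open
            else:
                a[i, j] = 1 if c[i] * c[j] == 1 else 0
    return a

mask = mura(11)   # open fraction is close to one half
```

The near-50% open fraction is what gives MURA masks their light-collection advantage over a single pinhole while preserving an invertible imaging pattern.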

  14. An Adaptive Homomorphic Aperture Photometry Algorithm for Merging Galaxies

    NASA Astrophysics Data System (ADS)

    Huang, J. C.; Hwang, C. Y.

    2017-03-01

    We present a novel automatic adaptive aperture photometry algorithm for measuring the total magnitudes of merging galaxies with irregular shapes. First, we use a morphological pattern recognition routine for identifying the shape of an irregular source in a background-subtracted image. Then, we extend the shape of the source by using the Dilation image operation to obtain an aperture that is quasi-homomorphic to the shape of the irregular source. The magnitude measured from the homomorphic aperture would thus have minimal contamination from the nearby background. As a test of our algorithm, we applied our technique to the merging galaxies observed by the Sloan Digital Sky Survey and the Canada–France–Hawaii Telescope. Our results suggest that the adaptive homomorphic aperture algorithm can be very useful for investigating extended sources with irregular shapes and sources in crowded regions.
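    The dilation step can be sketched with plain NumPy. The image, threshold, and dilation count below are invented for illustration; this is not the authors' pipeline.

```python
import numpy as np

def dilate(mask, n=1):
    """Binary dilation with a 3x3 structuring element, applied n times."""
    out = mask.astype(bool)
    for _ in range(n):
        padded = np.pad(out, 1)
        acc = np.zeros_like(padded)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                acc |= np.roll(np.roll(padded, di, axis=0), dj, axis=1)
        out = acc[1:-1, 1:-1]
    return out

# Hypothetical background-subtracted image with an irregular source.
img = np.zeros((32, 32))
img[10:14, 8:20] = 5.0          # toy "merging galaxy" footprint
source = img > 1.0              # recognized shape of the source

# Extend the shape by dilation to get a quasi-homomorphic aperture.
aperture = dilate(source, n=2)
total_flux = img[aperture].sum()
```

Because the aperture hugs the source shape instead of circumscribing it with a circle or ellipse, the summed flux picks up little of the surrounding background.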

  15. Monte-Carlo simulation of a coded aperture SPECT apparatus using uniformly redundant arrays

    NASA Astrophysics Data System (ADS)

    Gemmill, Paul E.; Chaney, Roy C.; Fenyves, Ervin J.

    1995-09-01

    Coded apertures are used in tomographic imaging systems to improve the signal-to-noise ratio (SNR) of the apparatus through a larger aperture transmission area while maintaining the spatial resolution of a single pinhole. Coded apertures developed from uniformly redundant arrays (URA) have an aperture transmission area of slightly over one half of the total aperture. Computer simulations show that the spatial resolution of a SPECT apparatus using a URA-generated coded aperture compares favorably with theoretical expectations and that its SNR is approximately 3.5 to 4 times that of a single-pinhole camera for a variety of cases.
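    The reported 3.5-4x figure is consistent with the idealized multiplex advantage for an isolated point source, where a URA with N open elements improves the Poisson-limited SNR by roughly sqrt(N). A back-of-envelope sketch (our estimate, not the paper's simulation):

```python
import math

def ura_point_source_snr_gain(n_open):
    """Idealized Poisson-limited SNR gain of a URA over a single pinhole
    for an isolated point source: sqrt(number of open elements)."""
    return math.sqrt(n_open)

# Masks with roughly 13-16 open elements would bracket the reported range.
gain_lo = ura_point_source_snr_gain(13)   # ~3.6
gain_hi = ura_point_source_snr_gain(16)   # 4.0
```

For extended sources the gain degrades, which is why the paper qualifies its figure as holding "for a variety of cases" rather than universally.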

  16. Hybrid coded aperture and Compton imaging using an active mask

    NASA Astrophysics Data System (ADS)

    Schultz, L. J.; Wallace, M. S.; Galassi, M. C.; Hoover, A. S.; Mocko, M.; Palmer, D. M.; Tornga, S. R.; Kippen, R. M.; Hynes, M. V.; Toolin, M. J.; Harris, B.; McElroy, J. E.; Wakeford, D.; Lanza, R. C.; Horn, B. K. P.; Wehe, D. K.

    2009-09-01

    The trimodal imager (TMI) images gamma-ray sources from a mobile platform using both coded aperture (CA) and Compton imaging (CI) modalities. In this paper we will discuss development and performance of image reconstruction algorithms for the TMI. In order to develop algorithms in parallel with detector hardware we are using a GEANT4 [J. Allison, K. Amako, J. Apostolakis, H. Araujo, P.A. Dubois, M. Asai, G. Barrand, R. Capra, S. Chauvie, R. Chytracek, G. Cirrone, G. Cooperman, G. Cosmo, G. Cuttone, G. Daquino, et al., IEEE Trans. Nucl. Sci. NS-53 (1) (2006) 270] based simulation package to produce realistic data sets for code development. The simulation code incorporates detailed detector modeling, contributions from natural background radiation, and validation of simulation results against measured data. Maximum likelihood algorithms for both imaging methods are discussed, as well as a hybrid imaging algorithm wherein CA and CI information is fused to generate a higher fidelity reconstruction.

  17. Terahertz coded aperture mask using vanadium dioxide bowtie antenna array

    NASA Astrophysics Data System (ADS)

    Nadri, Souheil; Percy, Rebecca; Kittiwatanakul, Lin; Arsenovic, Alex; Lu, Jiwei; Wolf, Stu; Weikle, Robert M.

    2014-09-01

    Terahertz imaging systems have received substantial attention from the scientific community for their use in astronomy, spectroscopy, plasma diagnostics and security. One approach to designing such systems is to use focal plane arrays. Although the principle of these systems is straightforward, realizing practical architectures has proven deceptively difficult. A different approach to imaging consists of spatially encoding the incoming flux of electromagnetic energy prior to detection using a reconfigurable mask. This technique is referred to as "coded aperture" or "Hadamard" imaging. This paper details the design, fabrication and testing of a prototype coded aperture mask operating at WR-1.5 (500-750 GHz) that uses the switching properties of vanadium dioxide (VO2). The reconfigurable mask consists of bowtie antennas with VO2 elements at the feed points. By symmetry, a unit cell of the array can be represented by an equivalent waveguide whose dimensions limit the maximum operating frequency. In this design, the cutoff frequency of the unit cell is 640 GHz. The VO2 devices are grown using reactive-biased target ion beam deposition. A reflection coefficient (S11) measurement of the mask in the WR-1.5 (500-750 GHz) band is conducted. The results are compared with circuit models and found to be in good agreement. A simulation of the transmission response of the mask shows a transmission modulation of up to 28 dB. This project is a first step towards the development of a full coded aperture imaging system operating at WR-1.5 with VO2 as the mask switching element.

  18. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer factor smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated in which holes are represented by 1's, nonholes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
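    The balanced-correlation idea can be seen in one dimension. A toy sketch of our own (without the support structure, so no 0-valued entries): a quadratic-residue URA decoded with +1 for holes and -1 for non-holes yields a delta-like response with perfectly flat sidelobes.

```python
import numpy as np

# Toy 1-D illustration of balanced-correlation decoding (our sketch; the
# patented pattern also carries 0-valued support regions, omitted here).
p = 7                                            # prime, p = 3 (mod 4)
qr = {(i * i) % p for i in range(1, p)}          # quadratic residues
mask = np.array([1 if i in qr else 0 for i in range(p)])    # 1 = hole
decode = 2 * mask - 1                            # holes -> +1, nonholes -> -1

# Periodic cross-correlation: a delta-like peak on perfectly flat sidelobes.
corr = np.array([(mask * np.roll(decode, -k)).sum() for k in range(p)])
```

The flat sidelobes are what make the decoded image an accurate representation of the source rather than a superposition of ghost artifacts.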

  19. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

    diameter of focal blur for clear aperture, f-number n (m); C(x, y), Laplacian of Gaussian for image over x and y (n/a); F(ρ, θ), Fourier transform of image in polar coordinates ρ and θ (n/a); F(x), Fourier transform of x (n/a); f, focal length of lens (m); i(ρ, θ), image in polar coordinates ρ and θ (m); g ... captures a Fourier transform of each image at various angles rather than low-resolution images [38]. Multiple coded images have also been used, with

  20. Development of the strontium iodide coded aperture (SICA) instrument

    NASA Astrophysics Data System (ADS)

    Mitchell, Lee J.; Phlips, Bernard F.; Grove, J. Eric; Cordes, Ryan

    2015-08-01

    This work reports on the development of a Strontium Iodide Coded Aperture (SICA) instrument for use in space-based astrophysics, solar physics, and high-energy atmospheric physics. The Naval Research Laboratory is developing a prototype coded aperture imager that will consist of an 8 x 8 array of SrI2:Eu detectors, each read out by a silicon photomultiplier. The array would be used to demonstrate SrI2:Eu detector performance for space-based missions. Europium-doped strontium iodide (SrI2:Eu) detectors have recently become available, and the material is a strong candidate to replace existing detector technology currently used for space-based gamma-ray astrophysics research. The detectors have a typical energy resolution of 3.2% at 662 keV, a significant improvement over the 6.5% energy resolution of thallium-doped sodium iodide. With a density of 4.59 g/cm3 and a Zeff of 49, SrI2:Eu has a high efficiency for MeV gamma-ray detection. Coupling this with recent improvements in silicon photomultiplier technology (i.e., no bulky photomultiplier tubes) enables high-density, large-area, low-power detector arrays with good energy resolution. The energy resolution of SrI2:Eu also makes it ideal for use as the back plane of a Compton telescope.

  1. Coded aperture Fast Neutron Analysis: Latest design advances

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto; Lanza, Richard C.

    2001-07-01

    Past studies have shown that materials of concern like explosives or narcotics can be identified in bulk from their atomic composition. Fast Neutron Analysis (FNA) is a nuclear method capable of providing this information even when considerable penetration is needed. Unfortunately, the cross sections of the nuclear phenomena and the solid angles involved are typically small, so that it is difficult to obtain high signal-to-noise ratios in short inspection times. CAFNA aims at combining the compound specificity of FNA with the potentially high SNR of coded apertures, an imaging method successfully used in far-field 2D applications. The transition to a near-field, 3D, and high-energy problem prevents a straightforward application of coded apertures and demands a thorough optimization of the system. In this paper, the considerations involved in the design of a practical CAFNA system for contraband inspection, its conclusions, and an estimate of the performance of such a system are presented as an evolution of the ideas introduced in previous expositions of the CAFNA concept.

  2. Coded-aperture Raman imaging for standoff explosive detection

    NASA Astrophysics Data System (ADS)

    McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca

    2012-06-01

    This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.

  3. Adaptive Full Aperture Wavefront Sensor Study

    NASA Technical Reports Server (NTRS)

    Robinson, William G.

    1997-01-01

    This grant and the work described were in support of a Seven Segment Demonstrator (SSD) and a review of wavefront sensing techniques proposed by the Government and Contractors for the Next Generation Space Telescope (NGST) Program. A team developed the SSD concept. For completeness, some of the information included in this report has also been included in the final report of a follow-on contract (H-27657D) entitled "Construction of Prototype Lightweight Mirrors". The original purpose of this GTRI study was to investigate how various wavefront sensing techniques might be most effectively employed with large (greater than 10 meter) aperture space based telescopes used for commercial and scientific purposes. However, due to changes in the scope of the work performed on this grant and in light of the initial studies completed for the NGST program, only a portion of this report addresses wavefront sensing techniques. The wavefront sensing techniques proposed by the Government and Contractors for the NGST were summarized in proposals and briefing materials developed by three study teams including NASA Goddard Space Flight Center, TRW, and Lockheed-Martin. In this report, GTRI reviews these approaches and makes recommendations concerning them. The objectives of the SSD were to demonstrate functionality and performance of a seven segment prototype array of hexagonal mirrors and supporting electromechanical components which address design issues critical to space optics deployed in large space based telescopes for astronomy and for optics used in space based optical communications systems. The SSD was intended to demonstrate technologies which can support the following capabilities: transportation in dense packaging within existing launcher payload envelopes, followed by deployment on orbit to form a space telescope with a large aperture; provision of very large (greater than 10 meters) primary reflectors of low mass and cost; and demonstration of the capability to form a segmented primary or

  4. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian [Livermore, CA]; Vetter, Kai M. [Alameda, CA]

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  5. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  6. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    SciTech Connect

    Zhang, F. N.; Hu, H. S.; Wang, D. M.; Jia, J.; Zhang, T. K.; Jia, Q. G.

    2015-12-15

    Aperture alignment is crucial for the diagnosis of neutron imaging because it has a significant impact on the coded imaging and on the understanding of the neutron source. In our previous studies on the neutron imaging system with a coded aperture for a large field of view, a “residual watermark,” certain extra information that overlies the reconstructed image and has nothing to do with the source, is discovered when peak normalization is employed in the genetic algorithm (GA) used to reconstruct the source image. Studies on the basic properties of the residual watermark indicate that it can characterize the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we have further analyzed the essential conditions for the existence of the residual watermark and the requirements the reconstruction algorithm must satisfy for the watermark to emerge. A gamma-ray coded imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for aperture misalignment has been studied. A multiple linear regression model relating the position of the coded aperture axis to the position of the residual watermark center and the gray barycenter of the neutron source has been set up with twenty training samples. Using the regression model and verification samples, we have found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct coded aperture misalignment for fast neutron coded imaging.
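    The misalignment correction can be sketched as an ordinary least-squares fit. All data below are synthetic, and the predictor choice is our simplification of the paper's model (one watermark-centre coordinate and one grey-barycentre coordinate as regressors, plus an intercept).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # twenty training samples, as in the paper

# Synthetic predictors: residual-watermark centre and source grey barycentre
# (one coordinate each, for brevity), plus an intercept column.
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
true_beta = np.array([0.5, 1.2, -0.7])   # invented ground-truth coefficients
y = X @ true_beta + rng.normal(scale=0.01, size=n)   # aperture-axis position

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residual_rms = np.sqrt(((X @ beta - y) ** 2).mean())
```

With low-noise training data the fitted coefficients recover the generating model closely, which is the property the paper exploits to reach ~20 μm accuracy on verification samples.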

  7. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on rational surfaces at marginal stability. Our code follows, in part, the philosophy of DCON by abandoning relaxation methods based on radial finite element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation effects, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed, as an application, to study transport-barrier physics in tokamak discharges.

  8. Event localization in bulk scintillator crystals using coded apertures

    NASA Astrophysics Data System (ADS)

    Ziock, K. P.; Braverman, J. B.; Fabris, L.; Harrison, M. J.; Hornback, D.; Newby, J.

    2015-06-01

    The localization of radiation interactions in bulk scintillators is generally limited by the size of the light distribution at the readout surface of the crystal/light-pipe system. By finding the centroid of the light spot, which is typically of order centimeters across, practical single-event localization is limited to 2 mm/cm of crystal thickness. Similar resolution can also be achieved for the depth of interaction by measuring the size of the light spot. Through the use of near-field coded-aperture techniques applied to the scintillation light, light transport simulations show that for 3-cm-thick crystals, more than a five-fold improvement (millimeter spatial resolution) can be achieved both laterally and in event depth. At the core of the technique is the requirement to resolve the shadow from an optical mask placed in the scintillation light path between the crystal and the readout. In this paper, experimental results are presented that demonstrate the overall concept using a 1D shadow mask, a thin-scintillator crystal and a light pipe of varying thickness to emulate a 2.2-cm-thick crystal. Spatial resolutions of 1 mm in both depth and transverse to the readout face are obtained over most of the crystal depth.
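    The centroid baseline that the coded-aperture technique improves upon can be sketched directly. The grid size, spot width, and event position below are invented for illustration.

```python
import numpy as np

# Simulated light spot on a 64 x 64 readout grid (positions in pixels).
x = np.arange(64)
xx, yy = np.meshgrid(x, x)
true_x, true_y = 20.3, 41.7              # hypothetical event position
sigma = 5.0                              # light-spot width at the readout
spot = np.exp(-((xx - true_x) ** 2 + (yy - true_y) ** 2) / (2 * sigma ** 2))

# Centroid (first-moment) estimate of the event position.
cx = (spot * xx).sum() / spot.sum()
cy = (spot * yy).sum() / spot.sum()
```

In the noiseless case the centroid is nearly exact; in practice photon statistics and the centimeter-scale spot width set the ~2 mm/cm limit quoted above, which is what motivates resolving the mask shadow instead.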

  9. Event Localization in Bulk Scintillator Crystals Using Coded Apertures

    SciTech Connect

    Ziock, Klaus-Peter; Braverman, Joshua B.; Fabris, Lorenzo; Harrison, Mark J.; Hornback, Donald Eric; Newby, Jason

    2015-06-01

    The localization of radiation interactions in bulk scintillators is generally limited by the size of the light distribution at the readout surface of the crystal/light-pipe system. By finding the centroid of the light spot, which is typically of order centimeters across, practical single-event localization is limited to ~2 mm/cm of crystal thickness. Similar resolution can also be achieved for the depth of interaction by measuring the size of the light spot. Through the use of near-field coded-aperture techniques applied to the scintillation light, light transport simulations show that for 3-cm-thick crystals, more than a five-fold improvement (millimeter spatial resolution) can be achieved both laterally and in event depth. At the core of the technique is the requirement to resolve the shadow from an optical mask placed in the scintillation light path between the crystal and the readout. In this paper, experimental results are presented that demonstrate the overall concept using a 1D shadow mask, a thin-scintillator crystal and a light pipe of varying thickness to emulate a 2.2-cm-thick crystal. Spatial resolutions of ~ 1 mm in both depth and transverse to the readout face are obtained over most of the crystal depth.

  10. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh are presented. A discussion of the limitations and potential areas of further study is also presented.

  11. Analysis of the simplified optics coma effect on spectral image inversion of a coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Liu, Yangyang; Lv, Qunbo; Li, Weiyan; Xiangli, Bin

    2015-09-01

    Push-broom coded aperture spectral imaging (PCASI), a spectral imaging technology developed in recent years, offers high throughput, high SNR, and high stability. It uses a fixed code template in a push-broom mode, which enables high-precision reconstruction of spatial and spectral information. During optical lens design, manufacturing, and alignment, however, minor coma errors inevitably arise, and even minor coma can degrade image quality. In this paper, we simulate the influence of the system's optical coma on the quality of the reconstructed image, analyze how the coded aperture varies under different amounts of coma, and derive an accurate curve relating image quality to coma for a 255×255 code template, which provides an important reference for the design and development of push-broom coded aperture spectrometers.

  12. A dual-sided coded-aperture radiation detection system

    NASA Astrophysics Data System (ADS)

    Penny, R. D.; Hood, W. E.; Polichar, R. M.; Cardone, F. H.; Chavez, L. G.; Grubbs, S. G.; Huntley, B. P.; Kuharski, R. A.; Shyffer, R. T.; Fabris, L.; Ziock, K. P.; Labov, S. E.; Nelson, K.

    2011-10-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5×5×50 cm³ cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.
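The dynamic gain stabilization described at the end can be illustrated as a damped correction that steers each detector's potassium-40 peak toward 1460.8 keV; the update rule, damping constant, and the simulated detector response below are illustrative assumptions, not the system's actual servo:

```python
K40_KEV = 1460.8  # energy of the potassium-40 line used as a reference

def update_gain(gain, measured_peak_kev, damping=0.1):
    """Nudge one detector's digital gain so its K-40 peak drifts toward 1460.8 keV."""
    return gain * (1.0 + damping * (K40_KEV / measured_peak_kev - 1.0))

# A detector whose K-40 peak reads at 1400 keV (at unit gain) converges to the
# correct gain after repeated updates:
gain = 1.0
for _ in range(200):
    gain = update_gain(gain, 1400.0 * gain)
print(round(gain, 4))  # 1.0434 (= 1460.8 / 1400, rounded)
```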

  13. The Protoexist2 Advanced CZT Coded Aperture Telescope

    NASA Astrophysics Data System (ADS)

    Allen, Branden; Hong, J.; Grindlay, J.; Barthelmy, S.; Baker, R.

    2011-09-01

    The ProtoEXIST program was conceived for the development of a scalable detector plane architecture utilizing pixelated CdZnTe (CZT) detectors for eventual deployment in a large-scale (1-4 m² active area) coded aperture X-ray telescope for use as a wide-field (90° × 70° FOV) all-sky monitor and survey instrument over the 5-600 keV energy band. The first phase of the program recently concluded with the successful 6-hour high-altitude (39 km) flight of ProtoEXIST1, which utilized a closely tiled 8 × 8 array of 20 mm × 20 mm, 5 mm thick Redlen CZT crystals, each bonded to a RadNET ASIC via an interposer board. Each individual CZT crystal utilized an 8 × 8 pixelated anode for the creation of a position-sensitive detector with 2.5 mm spatial resolution. Development of ProtoEXIST2, the second advanced CZT detector plane in this series, is currently under way. ProtoEXIST2 will be composed of a closely tiled 8 × 8 array of 20 mm × 20 mm, 5 mm thick Redlen CZT crystals, similar to ProtoEXIST1, but will now utilize the Nu-ASIC, which accommodates the direct bonding of CZT detectors with a 32 × 32 pixelated anode at a 604.8 μm pixel pitch. Characterization and performance of the ProtoEXIST2 detectors are discussed, as well as current progress in the integration of the ProtoEXIST2 detector plane.

  14. Direct aperture optimization for online adaptive radiation therapy

    SciTech Connect

    Mestrovic, Ante; Milette, Marie-Pierre; Nichol, Alan; Clark, Brenda G.; Otto, Karl

    2007-05-15

    This paper is the first investigation of using direct aperture optimization (DAO) for online adaptive radiation therapy (ART). A geometrical model representing the anatomy of a typical prostate case was created. To simulate interfractional deformations, four different anatomical deformations were created by systematically deforming the original anatomy by various amounts (0.25, 0.50, 0.75, and 1.00 cm). We describe a series of techniques where the original treatment plan was adapted in order to correct for the deterioration of dose distribution quality caused by the anatomical deformations. We found that the average time needed to adapt the original plan to arrive at a clinically acceptable plan is roughly half of the time needed for a complete plan regeneration, for all four anatomical deformations. Furthermore, through modification of the DAO algorithm the optimization search space was reduced and the plan adaptation was significantly accelerated. For the first anatomical deformation (0.25 cm), the plan adaptation was six times more efficient than the complete plan regeneration. For the 0.50 and 0.75 cm deformations, the optimization efficiency was increased by a factor of roughly 3 compared to the complete plan regeneration. However, for the anatomical deformation of 1.00 cm, the reduction of the optimization search space during plan adaptation did not result in any efficiency improvement over the original (nonmodified) plan adaptation. The anatomical deformation of 1.00 cm demonstrates the limit of this approach. We propose an innovative approach to online ART in which the plan adaptation and radiation delivery are merged together and performed concurrently--adaptive radiation delivery (ARD). A fundamental advantage of ARD is the fact that radiation delivery can start almost immediately after image acquisition and evaluation. Most of the original plan adaptation is done during the radiation delivery, so the time spent adapting the original plan does not

  15. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  16. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.

  17. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
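The block-multiplier mechanism can be sketched as follows; plain MSE stands in here for the paper's masking-adjusted perceptual error, and the multiplier set is an arbitrary example:

```python
import numpy as np

def quantize(dct_block, Q, m):
    """JPEG-style quantization of one 8x8 DCT block with base matrix Q scaled by m."""
    return np.round(dct_block / (Q * m))

def dequantize(q_block, Q, m):
    return q_block * (Q * m)

def pick_multiplier(dct_block, Q, target, scales=(0.5, 1.0, 2.0, 4.0)):
    """Choose the coarsest per-block multiplier whose error stays under `target`,
    driving the per-block error profile toward flatness across the image."""
    best = scales[0]
    for m in scales:  # ascending: keep the largest multiplier that still qualifies
        err = np.mean((dct_block - dequantize(quantize(dct_block, Q, m), Q, m)) ** 2)
        if err <= target:
            best = m
    return best
```

A perceptual version would replace the MSE line with an error term adjusted for contrast sensitivity, light adaptation, and contrast masking, as the abstract describes.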

  18. A new pad-based neutron detector for stereo coded aperture thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.

    2014-09-01

    A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive 3He-filled ionization chamber, in which an anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5×5 mm² anode pads (48×48 in total, corresponding to a 24×24 cm² sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid based neutron detectors. Among these are its rugged design, lighter weight and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates, and also suitable to perform data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge sharing algorithm. In this paper we will report on the development and performance of the new pad-based neutron camera, describe a charge sharing algorithm to achieve sub-pixel spatial resolution and present the first stereoscopic coded aperture images of thermalized neutron sources using the new coded aperture thermal neutron imager system.
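A charge-sharing centroid of the sort the abstract mentions can be sketched as a charge-weighted average over the pads that collect one event's charge (the pad pitch is from the abstract; the helper itself is an illustrative stand-in for the camera's algorithm):

```python
import numpy as np

PAD_PITCH_MM = 5.0  # anode pad pitch quoted in the abstract

def subpixel_position_mm(charges, pad_centers_mm):
    """Charge-weighted centroid over the pads sharing one event's charge;
    resolution can beat the 5 mm pad pitch when charge spans several pads."""
    charges = np.asarray(charges, dtype=float)
    return float(np.dot(charges, pad_centers_mm) / charges.sum())

# An event split 1 : 6 : 3 across pads centered at 0, 5 and 10 mm lands
# 1 mm to the right of the middle pad:
print(subpixel_position_mm([1.0, 6.0, 3.0], np.array([0.0, 5.0, 10.0])))  # 6.0
```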

  19. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma ray source directional identification, applied in astrophysics mainly. The increase in efficiency due to the substitution of the collimators by the coded masks renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniform Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study for the spatial localization of two point sources using coded aperture masks with rank 7 and 19.
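The triangulation step, estimating a 3-D source position from the directions decoded by the two gamma cameras, can be sketched as the midpoint of the shortest segment between the two back-projected rays (a standard construction; the function below is illustrative, not the authors' code):

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares intersection of rays x = p1 + t1*d1 and x = p2 + t2*d2:
    the midpoint of the common perpendicular between the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    denom = a * c - b * b                       # zero only for parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Two cameras at p1 and p2 whose decoded directions both point at (1, 2, 5):
src = np.array([1.0, 2.0, 5.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
print(np.allclose(triangulate(p1, src - p1, p2, src - p2), src))  # True
```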

  20. SU-E-J-20: Adaptive Aperture Morphing for Online Correction for Prostate Cancer Radiotherapy

    SciTech Connect

    Sandhu, R; Qin, A; Yan, D

    2014-06-01

    Purpose: Online adaptive aperture morphing is desirable over translational couch shifts because it accommodates not only target position variation but also anatomic changes (rotation, deformation, and the relation of the target to organs-at-risk). We propose a quick and reliable method for adapting segment aperture leaves for IMRT treatment of the prostate. Methods: The proposed method consists of the following steps: (1) delineate the contours of the prostate, SV, bladder and rectum on kV-CBCT; (2) determine the prostate displacement from the rigid-body registration of the contoured prostate manifested on the reference CT and the CBCT; (3) adapt the MLC segment apertures obtained from the pre-treatment IMRT planning to accommodate the shifts as well as the anatomic changes. The MLC aperture adaptive algorithm involves two steps: first, move the whole aperture according to the prostate translational/rotational shifts; second, fine-tune the aperture shape to carry the spatial relationship between the planning target contour and the MLC aperture over to the daily target contour. Feasibility of this method was evaluated retrospectively on a seven-field IMRT treatment of a prostate cancer patient by comparing dose volume histograms of the original plan and the aperture-adjusted plan, with/without additional segment weight optimization (SWO), on two daily treatment CBCTs selected for relatively large motion and rotation. Results: For the first daily treatment, the prostate rotation was significant (12 degrees around the lateral axis). With the aperture-adjusted plan, the D95 to the target was improved by 25% and the rectum dose (D30, D40) was reduced by 20% relative to the original plan on daily volumes. For the second treatment fraction (lateral shift = 6.7 mm), after adjustment the target D95 improved by 3% and the bladder dose (D30, maximum dose) was reduced by 1%. For both cases, extra SWO did not provide significant improvement. Conclusion: The proposed method of adapting segment apertures is promising in treatment position correction
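The two-step leaf adjustment (whole-aperture shift, then per-leaf fine-tuning against the daily contour) can be sketched in one dimension per leaf pair; the data layout and the margin-preserving rule here are illustrative assumptions, not the abstract's actual algorithm:

```python
def morph_aperture(leaf_pairs, dx, plan_edges, daily_edges):
    """Step 1: translate every (left, right) leaf pair by the target shift dx.
    Step 2: fine-tune each leaf by the residual needed to restore, against the
    daily target contour, the margin the plan had to the planning contour."""
    adapted = []
    for (left, right), (pl, pr), (dl, dr) in zip(leaf_pairs, plan_edges, daily_edges):
        l, r = left + dx, right + dx          # coarse whole-aperture shift
        l += (dl - pl) - dx                   # residual left-edge correction
        r += (dr - pr) - dx                   # residual right-edge correction
        adapted.append((l, r))
    return adapted

# A leaf pair at (-2, 2) cm planned around a target spanning (-1.5, 1.5);
# the daily target shifts and deforms to (-1.0, 2.0):
print(morph_aperture([(-2.0, 2.0)], 0.5, [(-1.5, 1.5)], [(-1.0, 2.0)]))
# [(-1.5, 2.5)]
```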

  1. Is the critical point for aperture crossing adapted to the person-plus-object system?

    PubMed

    Hackney, Amy L; Cinelli, Michael E; Frank, Jim S

    2014-01-01

    When passing through apertures, individuals scale their actions to their shoulder width and rotate their shoulders or avoid apertures that are deemed too small for straight passage. Carrying objects wider than the body produces a person-plus-object system that individuals must account for in order to pass through apertures safely. The present study aimed to determine whether individuals scale their critical point to the widest horizontal dimension (shoulder or object width). Two responses emerged: Fast adapters adapted to the person-plus-object system by maintaining a consistent critical point regardless of whether the object was carried while slow adapters initially increased their critical point (overestimated) before adapting back to their original critical point. The results suggest that individuals can account for increases in body width by scaling actions to the size of the object width but people adapt at different rates.

  2. Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems

    SciTech Connect

    Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter

    2010-01-01

    From a safeguards perspective, gamma-ray imaging has the potential to reduce manpower and cost for effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.

  3. Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging

    SciTech Connect

    Gamage, K.A.A.; Zhou, Q.

    2015-07-01

    A series of Monte Carlo simulations has been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator-based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays and is therefore of particular use in a system with a source that emits both neutrons and gamma rays. The simulations used an AmBe source; neutron images have been produced using different coded-aperture materials (boron-10, cadmium-113 and gadolinium-157), and the location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can readily be used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications. (authors)

  4. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-06

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  5. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput.
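The balanced-correlation decoding described here (holes → +1, opaque cells → -1, support → 0) can be demonstrated with a 1-D modified URA; the construction below is the standard quadratic-residue recipe for illustration, not the patent's specific self-supporting pattern:

```python
import numpy as np

p = 13                                   # prime with p % 4 == 1
qr = {(i * i) % p for i in range(1, p)}  # quadratic residues mod p
A = np.array([0] + [1 if i in qr else 0 for i in range(1, p)])  # 1 = hole
G = 2 * A - 1                            # holes -> +1, non-holes -> -1
G[0] = 1                                 # (support cells would be encoded as 0)

def shadowgram(scene):
    # each source position k projects a shifted copy of the aperture pattern
    return sum(w * np.roll(A, k) for k, w in enumerate(scene))

def decode(counts):
    # balanced cyclic cross-correlation of the detector counts with G
    return np.array([counts @ np.roll(G, t) for t in range(p)])

scene = np.zeros(p)
scene[5] = 1.0                           # a single point source at position 5
print(int(np.argmax(decode(shadowgram(scene)))))  # 5
```

For this aperture/decoder pair the cyclic correlation is a perfect delta: the reconstruction peaks at the source position and the sidelobes are exactly zero.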

  6. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Sowa, Katarzyna M; Last, Arndt; Korecki, Paweł

    2017-03-21

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10-100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy.

  7. Lensless coded-aperture imaging with separable Doubly-Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2015-02-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures-opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images-orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial-light-modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks-images collected in natural light of extended outdoor scenes are rendered clearly.

  8. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics

    NASA Astrophysics Data System (ADS)

    Sowa, Katarzyna M.; Last, Arndt; Korecki, Paweł

    2017-03-01

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10–100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy.

  9. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics

    PubMed Central

    Sowa, Katarzyna M.; Last, Arndt; Korecki, Paweł

    2017-01-01

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10–100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy. PMID:28322316

  10. Adaptable recursive binary entropy coding technique

    NASA Astrophysics Data System (ADS)

    Kiely, Aaron B.; Klimesh, Matthew A.

    2002-07-01

    We present a novel data compression technique, called recursive interleaved entropy coding, that is based on recursive interleaving of variable-to-variable-length binary source codes. A compression module implementing this technique has the same functionality as arithmetic coding and can be used as the engine in various data compression algorithms. The encoder compresses a bit sequence by recursively encoding groups of bits that have similar estimated statistics, ordering the output in a way that is suited to the decoder. As a result, the decoder has low complexity. The encoding process for our technique is adaptable in that each bit to be encoded has an associated probability-of-zero estimate that may depend on previously encoded bits; this adaptability allows more effective compression. Recursive interleaved entropy coding may have advantages over arithmetic coding, including most notably the admission of a simple and fast decoder. Much variation is possible in the choice of component codes and in the interleaving structure, yielding coder designs of varying complexity and compression efficiency; coder designs that achieve arbitrarily small redundancy can be produced. We discuss coder design and performance estimation methods. We present practical encoding and decoding algorithms, as well as measured performance results.
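A minimal example of the building block, a variable-to-variable-length binary source code, parses the input into phrases and emits prefix-free codewords; this toy table (tuned for zero-heavy input) is our own illustration, not one of the paper's component codes:

```python
# Phrase set and codeword set are both prefix-free, so parsing and decoding
# are unambiguous; runs of zeros compress, other patterns expand slightly.
PHRASES = {"000": "0", "1": "10", "01": "110", "001": "111"}
CODEWORDS = {v: k for k, v in PHRASES.items()}

def encode(bits):
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in PHRASES:
            out.append(PHRASES[buf])
            buf = ""
    return "".join(out), buf   # buf holds an unterminated tail, flushed separately

def decode(code):
    out, buf = [], ""
    for b in code:
        buf += b
        if buf in CODEWORDS:
            out.append(CODEWORDS[buf])
            buf = ""
    return "".join(out)

code, tail = encode("0000001000")
print(code, decode(code))  # 00100 0000001000
```

The paper's scheme goes further by recursively interleaving many such codes, with each bit routed to a code matching its probability-of-zero estimate.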

  11. The use of an active coded aperture for improved directional measurements in high energy gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Johansson, A.; Beron, B. L.; Campbell, L.; Eichler, R.; Hofstadter, R.; Hughes, E. B.; Wilson, S.; Gorodetsky, P.

    1980-01-01

    The coded aperture, a refinement of the scatter-hole camera, offers a method for the improved measurement of gamma-ray direction in gamma-ray astronomy. Two prototype coded apertures have been built and tested. The more recent of these has 128 active elements of the heavy scintillator BGO. Results of tests for gamma-rays in the range 50-500 MeV are reported and future application in space discussed.

  12. Adaptive down-sampling video coding

    NASA Astrophysics Data System (ADS)

    Wang, Ren-Jie; Chien, Ming-Chen; Chang, Pao-Chi

    2010-01-01

    Down-sampling coding, which sub-samples the image and encodes the smaller images, is one solution for improving image quality when the available bit rate is insufficient. In this work, we propose an Adaptive Down-Sampling (ADS) coding scheme for H.264/AVC. The overall system distortion can be analyzed as the sum of the down-sampling distortion and the coding distortion. The down-sampling distortion is mainly the loss of high-frequency components and depends strongly on the spatial difference of the content. The coding distortion can be derived from classical rate-distortion theory. For a given rate and video sequence, the optimal down-sampling resolution ratio can be derived by minimizing the system distortion based on the models of the two distortions. This optimal resolution ratio is used in both the down-sampling and up-sampling processes of the ADS coding scheme. As a result, the rate-distortion performance of ADS coding is always higher than that of fixed-ratio coding or H.264/AVC by 2 to 4 dB at low to medium rates.
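The optimization the abstract describes, picking the down-sampling ratio that minimizes the sum of the two distortions at a given rate, can be sketched with hypothetical stand-in models (the paper derives its models from the content's spatial difference and classical rate-distortion theory; the functional forms below are invented for illustration):

```python
import numpy as np

def total_distortion(r, rate, alpha=0.1):
    """Hypothetical stand-ins: down-sampling by ratio r discards high-frequency
    detail (growing as r shrinks), while coding distortion follows the classic
    D ~ 2^(-2R) law with the per-pixel rate scaled by the pixel count (r^2)."""
    d_down = alpha * (1.0 / r ** 2 - 1.0)
    d_code = 2.0 ** (-2.0 * rate / r ** 2)
    return d_down + d_code

ratios = np.linspace(0.3, 1.0, 71)
best = min(ratios, key=lambda r: total_distortion(r, rate=0.3))
# With these toy models the minimizer is an interior ratio, i.e. encoding a
# down-sampled image beats full-resolution coding at this rate.
```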

  13. Large Coded Aperture Mask for Spaceflight Hard X-ray Images

    NASA Technical Reports Server (NTRS)

    Vigneau, Danielle N.; Robinson, David W.

    2002-01-01

    The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random, but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel which projects a shadow on the detector array during a gamma ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied and allowed to cure to each tile. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and it was lowered to the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.

  14. Two-layer and Adaptive Entropy Coding Algorithms for H.264-based Lossless Image Coding

    DTIC Science & Technology

    2008-04-01

    adaptive binary arithmetic coding (CABAC) [7], and context-based adaptive variable length coding (CAVLC) [3], should be adaptively adopted for advancing...Sep. 2006. [7] H. Schwarz, D. Marpe and T. Wiegand, Context-based adaptive binary arithmetic coding in the H.264/AVC video compression standard, IEEE

  15. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e. adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the family of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. Therefore the solutions are non-unique and depend on the order and direction of adaption.
Non-uniqueness of the adapted grid is
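A one-dimensional pass of the spring analogy described above can be sketched as follows; the stiffness law and the `strength` parameter are illustrative assumptions, not the SAGE code's actual formulation:

```python
import numpy as np

# One 1D pass of solution-adaptive gridding in the spirit of the spring
# analogy: spring stiffness rises with the local flow gradient, so points
# cluster where the gradient is strong. Equilibrium of the spring chain is
# a tridiagonal linear system in the point locations (solved densely here
# for brevity; a tridiagonal solver would be used in practice).
def adapt_line(x, f, strength=5.0):
    grad = np.abs(np.gradient(f, x))
    k = 1.0 + strength * grad / grad.max()   # stiffness at each node
    k = 0.5 * (k[:-1] + k[1:])               # one stiffness per spring
    n = len(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0                # endpoints held fixed
    b[0], b[-1] = x[0], x[-1]
    for i in range(1, n - 1):
        # force balance: k[i-1]*(x[i]-x[i-1]) = k[i]*(x[i+1]-x[i])
        A[i, i - 1] = k[i - 1]
        A[i, i] = -(k[i - 1] + k[i])
        A[i, i + 1] = k[i]
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 1.0, 41)
f = np.tanh(20.0 * (x - 0.5))                # shock-like gradient at x = 0.5
x_new = adapt_line(x, f)                     # points cluster near the shock
```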

  16. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-06

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in resolution and the development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO that enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane and can incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.
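The FFT-based convolution idea can be illustrated with a toy periodic object; the aperture pattern and object below are arbitrary stand-ins, not the XCAMPO model itself:

```python
import numpy as np

# Circular convolution via FFT exploits the assumed periodicity; for a
# non-periodic object one would zero-pad so the period exceeds the FOV.
rng = np.random.default_rng(0)
aperture = (rng.random((64, 64)) > 0.5).astype(float)   # structured source
obj = np.zeros((64, 64))
obj[::8, ::8] = 1.0                                      # periodic object

image = np.real(np.fft.ifft2(np.fft.fft2(aperture) * np.fft.fft2(obj)))

# sanity check against an explicit circular convolution at one pixel
direct = sum(aperture[(3 - i) % 64, (5 - j) % 64] * obj[i, j]
             for i in range(64) for j in range(64))
```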

  17. Coded aperture x-ray diffraction imaging with transmission computed tomography side-information

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.

    2016-03-01

    Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction that incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.

  18. Design of coded aperture arrays by means of a global optimization algorithm

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2006-08-01

    Coded aperture imaging (CAI) has evolved as a standard technique for imaging high-energy photon sources and has found numerous applications. Coded aperture arrays (CAAs) are the most important devices in applications of CAI. In recent years, many approaches have been presented to design optimum or near-optimum CAAs. Uniformly redundant arrays (URAs) are the most successful CAAs because their cyclic autocorrelation consists of a sequence of delta functions on a flat sidelobe, which can easily be subtracted once the object has been reconstructed. Unfortunately, the existing methods can only be used to design URAs with a limited number of array sizes and a fixed autocorrelation sidelobe-to-peak ratio. In this paper, we present a method to design more flexible URAs by means of a global optimization algorithm named DIRECT. With our approach, we obtain various types of URAs, including the filled URAs that can be constructed by existing methods and sparse URAs that, to the best of our knowledge, have not previously been constructed or reported.
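The flat-sidelobe property that makes URAs attractive can be demonstrated with the classic one-dimensional quadratic-residue construction (shown here as background; the paper's DIRECT-based designs are not reproduced):

```python
import numpy as np

# 1D uniformly redundant array from quadratic residues: for a prime p with
# p % 4 == 3, the cyclic autocorrelation is two-valued -- a delta-function
# peak of height (p-1)/2 on a perfectly flat pedestal of height (p-3)/4.
def ura_1d(p):
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1 if i in residues else 0 for i in range(p)])

a = ura_1d(19)
acf = np.array([np.sum(a * np.roll(a, k)) for k in range(19)])
```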

  19. Unified Synthetic Aperture Space Time Adaptive Radar (USASTAR) Concept

    DTIC Science & Technology

    2007-11-02

    Imaging before multichannel adaptive clutter cancellation processing ... With widely separated phase centers, estimation and phase ... ICM spectrum P_ICM(v) = (r/(r+1)) delta(v) + (1/(r+1)) P_AC(v) (3.4). Clutter correlation p_ICM(tau) = Integral P_ICM(v) exp(j 2 k_0 v tau) dv. One popular model is to employ

  20. Adaptive discrete cosine transform based image coding

    NASA Astrophysics Data System (ADS)

    Hu, Neng-Chung; Luoh, Shyan-Wen

    1996-04-01

    In this discrete cosine transform (DCT) based image coding, the DCT kernel matrix is decomposed into a product of two matrices. The first matrix is called the discrete cosine preprocessing transform (DCPT), whose kernels are plus or minus 1 or plus or minus one-half. The second matrix is the postprocessing stage treated as a correction stage that converts the DCPT to the DCT. On applying the DCPT to image coding, image blocks are processed by the DCPT, then a decision is made to determine whether the processed image blocks are inactive or active in the DCPT domain. If the processed image blocks are inactive, then the compactness of the processed image blocks is the same as that of the image blocks processed by the DCT. However, if the processed image blocks are active, a correction process is required; this is achieved by multiplying the processed image block by the postprocessing stage. As a result, this adaptive image coding achieves the same performance as the DCT image coding, and both the overall computation and the round-off error are reduced, because both the DCPT and the postprocessing stage can be implemented by distributed arithmetic or fast computation algorithms.
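The factorization idea can be sketched numerically: quantize the DCT kernel entries to plus or minus 1 and plus or minus one-half to form a preprocessing matrix, then recover the exact DCT through a correction (postprocessing) matrix. The 0.3 magnitude threshold is an illustrative choice, not necessarily the paper's DCPT:

```python
import numpy as np

# Orthonormal 8x8 DCT-II kernel matrix
def dct_matrix(n):
    j = np.arange(n)
    C = np.cos(np.pi * np.outer(np.arange(n), 2 * j + 1) / (2 * n))
    C[0] *= np.sqrt(1.0 / n)
    C[1:] *= np.sqrt(2.0 / n)
    return C

C = dct_matrix(8)
# Crude preprocessing matrix with entries restricted to +-1 and +-1/2
P = np.where(np.abs(C) > 0.3, np.sign(C), 0.5 * np.sign(C))
# Postprocessing ("correction") stage, defined so that T @ P == C exactly
T = C @ np.linalg.inv(P)
err = np.max(np.abs(T @ P - C))
```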

  1. Rate-distortion optimized adaptive transform coding

    NASA Astrophysics Data System (ADS)

    Lim, Sung-Chang; Kim, Dae-Yeon; Jeong, Seyoon; Choi, Jin Soo; Choi, Haechul; Lee, Yung-Lyul

    2009-08-01

    We propose a rate-distortion optimized transform coding method that adaptively employs either an integer cosine transform (an integer-approximated version of the discrete cosine transform, DCT) or an integer sine transform (IST) in a rate-distortion sense. The DCT, which has been adopted in most video-coding standards, is known as a suboptimal substitute for the Karhunen-Loève transform. However, depending on the correlation of a signal, an alternative transform can achieve higher coding efficiency. We introduce a discrete sine transform (DST) that achieves high energy compactness in the correlation-coefficient range of -0.5 to 0.5 and apply it to the current design of H.264/AVC (advanced video coding). Moreover, to avoid encoder-decoder mismatch and keep the implementation simple, an IST that is an integer-approximated version of the DST is developed. The experimental results show that the proposed method achieves a Bjøntegaard delta bit-rate gain of up to 5.49% compared to Joint Model 11.0.
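A minimal sketch of rate-distortion transform selection between a DCT and a DST follows; the Lagrange multiplier, quantizer, and rate proxy are made-up illustrations, not the H.264/AVC mode-decision machinery:

```python
import numpy as np

def transform_matrix(n, kind):
    j = np.arange(n)
    if kind == "dct":
        # orthonormal DCT-II
        M = np.cos(np.pi * np.outer(np.arange(n), 2 * j + 1) / (2 * n))
        M[0] *= np.sqrt(1.0 / n)
        M[1:] *= np.sqrt(2.0 / n)
    else:
        # orthonormal DST-I
        M = np.sqrt(2.0 / (n + 1)) * np.sin(
            np.pi * np.outer(j + 1, j + 1) / (n + 1))
    return M

def rd_cost(block, M, q=8.0, lam=10.0):
    coeffs = M @ block @ M.T
    quant = np.round(coeffs / q)
    # orthonormal transform: coefficient-domain error equals pixel-domain error
    dist = np.sum((coeffs - quant * q) ** 2)
    rate = np.count_nonzero(quant)        # crude rate proxy
    return dist + lam * rate              # J = D + lambda * R

def choose_transform(block):
    costs = {k: rd_cost(block, transform_matrix(block.shape[0], k))
             for k in ("dct", "dst")}
    return min(costs, key=costs.get)

rng = np.random.default_rng(1)
block = rng.normal(size=(8, 8))
choice = choose_transform(block)
```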

  2. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
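The BAQ principle can be sketched as follows; the block size, bit depth, and 3-sigma loading factor are illustrative assumptions rather than the ENVISAT ASAR parameters:

```python
import numpy as np

# Block adaptive quantization sketch: each block is scaled by its own
# estimated standard deviation, then coarsely quantized; the per-block scale
# is transmitted so the ground station can reconstruct the samples.
def baq_encode(signal, block=128, bits=4):
    levels = 2 ** (bits - 1)
    codes, scales = [], []
    for start in range(0, len(signal), block):
        seg = signal[start:start + block]
        sigma = seg.std() or 1.0
        # load the quantizer so +-3 sigma spans the code range
        q = np.clip(np.round(seg / sigma * levels / 3), -levels, levels - 1)
        codes.append(q.astype(np.int8))
        scales.append(sigma)
    return codes, scales

def baq_decode(codes, scales, bits=4):
    levels = 2 ** (bits - 1)
    return np.concatenate([q * 3.0 * s / levels for q, s in zip(codes, scales)])

rng = np.random.default_rng(2)
raw = rng.normal(scale=50.0, size=1024)          # stand-in for SAR raw data
rec = baq_decode(*baq_encode(raw))
snr = 10 * np.log10(np.sum(raw ** 2) / np.sum((raw - rec) ** 2))
```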

  3. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769

  4. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a high modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hyper Cube, Grid, Adaptive and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET me

  5. ProtoEXIST: advanced prototype CZT coded aperture telescopes for EXIST

    NASA Astrophysics Data System (ADS)

    Allen, Branden; Hong, Jaesub; Grindlay, Josh; Barthelmy, Scott D.; Baker, Robert G.; Gehrels, Neil A.; Garson, Trey; Krawczynski, Henric S.; Cook, Walter R.; Harrison, Fiona A.; Apple, Jeffrey A.; Ramsey, Brian D.

    2010-07-01

    ProtoEXIST1 is a pathfinder for the EXIST-HET, a coded aperture hard X-ray telescope with a 4.5 m2 CZT detector plane and a 90x70 degree field of view, to be flown as the primary instrument on the EXIST mission and intended to monitor the full sky every 3 h in an effort to locate GRBs and other high-energy transients. ProtoEXIST1, which consists of a 256 cm2 tiled CZT detector plane containing 4096 pixels (an 8x8 array of individual 1.95 cm x 1.95 cm x 0.5 cm CZT detector modules, each with an 8 x 8 pixelated anode) configured as a coded aperture telescope with a fully coded 10° x 10° field of view employing passive side shielding and an active CsI anti-coincidence rear shield, recently completed its maiden flight out of Ft. Sumner, NM on the 9th of October 2009. During its 6-hour flight, on-board calibration of the detector plane was carried out utilizing a single tagged 198.8 nCi Am-241 source, along with simultaneous measurement of the background spectrum and an observation of Cygnus X-1. Here we recount the events of the flight and report on the detector performance in a near-space environment. We also briefly discuss ProtoEXIST2, the next stage of detector development, which employs the NuSTAR ASIC enabling finer (32×32) anode pixelation. When completed, ProtoEXIST2 will consist of a 256 cm2 tiled array and be flown simultaneously with the ProtoEXIST1 telescope.

  6. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that are characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  7. Evaluation of the cosmic-ray induced background in coded aperture high energy gamma-ray telescopes

    NASA Technical Reports Server (NTRS)

    Owens, Alan; Barbier, Loius M.; Frye, Glenn M.; Jenkins, Thomas L.

    1991-01-01

    While the application of coded-aperture techniques to high-energy gamma-ray astronomy offers potential arc-second angular resolution, concerns have been raised about the level of secondary radiation produced in a thick high-Z mask. A series of Monte Carlo calculations was conducted to evaluate and quantify the cosmic-ray-induced neutral-particle background produced in a coded-aperture mask. It is shown that this component may be neglected, being at least a factor of 50 lower in intensity than the cosmic diffuse gamma rays.

  8. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms for ocean image synthesis and surface wave analysis were implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High-resolution images of the ocean often contain no striking features, and subtle image modulations by wind-generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large-scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives, and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground-station processors.
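The spectrum-archiving idea can be illustrated in a few lines; the scene and the retained low-wavenumber band below are arbitrary stand-ins:

```python
import numpy as np

# Archive the complex Fourier spectrum of a scene instead of the scene
# itself; the full spectrum is exactly invertible, while keeping only a
# low-wavenumber band (the surface-wave regime) reduces the data volume.
rng = np.random.default_rng(3)
scene = rng.normal(size=(128, 128))

spectrum = np.fft.fft2(scene)                   # archived product
recovered = np.real(np.fft.ifft2(spectrum))     # exact ground reconstruction

# band-limited variant: keep the four low-frequency corners (~6% of data)
k = 16
mask = np.zeros_like(spectrum)
mask[:k, :k] = mask[:k, -k:] = mask[-k:, :k] = mask[-k:, -k:] = 1
approx = np.real(np.fft.ifft2(spectrum * mask))
```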

  9. A coded aperture compressive imaging array and its visual detection and tracking algorithms for surveillance systems.

    PubMed

    Chen, Jing; Wang, Yongtian; Wu, Hanxiao

    2012-10-29

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and facilitate storage of the projection matrix. Random Gaussian, Toeplitz, and binary phase-coded masks are utilized to obtain the compressive sensing images. The corresponding motion-target detection and tracking algorithms, which operate directly on the compressive sampling images, are developed. A mixture-of-Gaussians model is applied in the compressive image space to model the background and detect the foreground. Each motion target in the compressive sampling domain is sparsely represented in a compressive feature dictionary spanned by target templates and noise templates. An l(1) optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed up to 10 times faster than that of the l(1) tracker without any optimization.
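A toy sketch of detection directly in the compressive domain follows; the simple mean-background and residual test stand in for the paper's mixture-of-Gaussians model, and all sizes are made up:

```python
import numpy as np

# Frames are measured through a random binary mask (rows of Phi); a running
# background mean is kept on the compressed samples, and a frame containing
# a bright new target is flagged by its large residual -- no reconstruction
# of the full-resolution image is needed for detection.
rng = np.random.default_rng(4)
n, m = 256, 64                                  # scene pixels, measurements
phi = (rng.random((m, n)) > 0.5).astype(float)  # binary mask rows

background = rng.normal(0.0, 0.1, size=n)
frames = [background + rng.normal(0.0, 0.01, size=n) for _ in range(20)]
ys = [phi @ f for f in frames]
mean_y = np.mean(ys, axis=0)                    # compressed background model
noise_level = max(np.linalg.norm(y - mean_y) for y in ys)

target_frame = background.copy()
target_frame[100:110] += 5.0                    # bright target enters scene
residual = np.linalg.norm(phi @ target_frame - mean_y)
```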

  10. Compatibility of Spatially Coded Apertures with a Miniature Mattauch-Herzog Mass Spectrograph

    NASA Astrophysics Data System (ADS)

    Russell, Zachary E.; DiDona, Shane T.; Amsden, Jason J.; Parker, Charles B.; Kibelka, Gottfried; Gehm, Michael E.; Glass, Jeffrey T.

    2016-04-01

    In order to minimize losses in signal intensity often present in mass spectrometry miniaturization efforts, we recently applied the principles of spatially coded apertures to magnetic sector mass spectrometry, thereby achieving increases in signal intensity of greater than 10× with no loss in mass resolution Chen et al. (J. Am. Soc. Mass Spectrom. 26, 1633-1640, 2015), Russell et al. (J. Am. Soc. Mass Spectrom. 26, 248-256, 2015). In this work, we simulate theoretical compatibility and demonstrate preliminary experimental compatibility of the Mattauch-Herzog mass spectrograph geometry with spatial coding. For the simulation-based theoretical assessment, COMSOL Multiphysics finite element solvers were used to simulate electric and magnetic fields, and a custom particle tracing routine was written in C# that allowed for calculations of more than 15 million particle trajectory time steps per second. Preliminary experimental results demonstrating compatibility of spatial coding with the Mattauch-Herzog geometry were obtained using a commercial miniature mass spectrograph from OI Analytical/Xylem.

  11. Fast-neutron coded-aperture imaging of special nuclear material configurations

    SciTech Connect

    P. A. Hausladen; M. A. Blackston; E. Brubaker; D. L. Chichester; P. Marleau; R. J. Newby

    2012-07-01

    In the past year, a prototype fast-neutron coded-aperture imager has been developed that has sufficient efficiency and resolution to make the counting of warheads for possible future treaty confirmation scenarios via their fission-neutron emissions practical. The imager is constructed from custom-built pixelated liquid scintillator detectors. The liquid scintillator detectors enable neutron-gamma discrimination via pulse shape, and the pixelated construction enables a sufficient number of pixels for imaging in a compact detector with a manageable number of channels of readout electronics. The imager has been used to image neutron sources at ORNL, special nuclear material (SNM) sources at the Idaho National Laboratory (INL) Zero Power Physics Reactor (ZPPR) facility, and neutron source and shielding configurations at Sandia National Laboratories. This paper reports on the design and construction of the imager, characterization measurements with neutron sources at ORNL, and measurements with SNM at the INL ZPPR facility.

  12. Development and evaluation of a portable CZT coded aperture gamma-camera

    SciTech Connect

    Montemont, G.; Monnet, O.; Stanchina, S.; Maingault, L.; Verger, L.; Carrel, F.; Lemaire, H.; Schoepff, V.; Ferrand, G.; Lalleman, A.-S.

    2015-07-01

    We present the design and the evaluation of a CdZnTe (CZT) based gamma camera using a coded aperture mask. This camera, based on an 8 cm{sup 3} detection module, is small enough to be portable and battery-powered (4 kg weight and 4 W power dissipation). As the detector has spectral capabilities, the gamma camera allows isotope identification and colored imaging by assigning one color channel to each identified isotope. As all data processing is done in real time, the user can directly observe the outcome of an acquisition and react immediately to what is observed. We first present the architecture of the system, how the detector works, and its performance. We then focus on the imaging technique used and its strengths and limitations. Finally, results concerning sensitivity, spatial resolution, field of view, and multi-isotope imaging are shown and discussed. (authors)

  13. The laser linewidth effect on the image quality of phase coded synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren

    2015-12-01

    The phase coded (PC) waveform in synthetic aperture ladar (SAL) outperforms the linear frequency modulated (LFM) signal through lower side lobes, a shorter pulse duration, and by making rigid control of the chirp starting point in every pulse unnecessary. Drawing on radar PC waveforms and strip-map SAL, the backscattered signal of a point target in PC SAL is derived, and a two-dimensional matched-filtering algorithm is introduced to focus a point image. As an inherent property of the laser, linewidth is always detrimental to coherent ladar imaging. With a widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality was theoretically analyzed and examined via Monte Carlo simulation. The research gives a clear view of how to select linewidth parameters in future PC SAL systems.

  14. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so masks with small numbers of elements are called for. In this case it is feasible to analyze by computer all possible mask patterns of a given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies, and transparencies of different mask patterns is developed. The results of such a computer analysis for masks up to a 5 x 5 unit cell are presented, and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.
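The exhaustive-search idea for small masks can be sketched as follows; the sidelobe-flatness score and open-fraction constraint are simplified stand-ins for the paper's performance criteria:

```python
import numpy as np
from itertools import product

# Brute force over every 3x3 unit cell: score each candidate mask by the
# flatness of its cyclic autocorrelation sidelobes and keep the best one.
def cyclic_acf(mask):
    f = np.fft.fft2(mask)
    return np.real(np.fft.ifft2(f * np.conj(f)))

best, best_score = None, np.inf
for bits in product([0, 1], repeat=9):
    mask = np.array(bits, dtype=float).reshape(3, 3)
    n_open = int(mask.sum())
    if not 3 <= n_open <= 5:            # require partial transparency
        continue
    acf = cyclic_acf(mask)
    sidelobes = np.delete(acf.ravel(), 0)       # drop the central peak
    score = sidelobes.max() - sidelobes.min()   # 0 would be perfectly flat
    if score < best_score:
        best, best_score = mask, score
```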

  15. Imaging of spatially extended hot spots with coded apertures for intra-operative nuclear medicine applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Potiriadis, C.; Karafasoulis, K.; Loukas, D.; Lambropoulos, C. P.

    2017-01-01

    Coded aperture imaging surpasses planar imaging with conventional collimators in efficiency and field of view (FOV). We present experimental results for the detection of 141 keV and 122 keV γ-photons emitted by uniformly extended 99mTc and 57Co hot spots, along with simulations of uniformly and normally extended 99mTc hot spots. These results show that the method can be used for intra-operative imaging of radio-traced sentinel nodes and thyroid remnants. The study is performed using a setup of two gamma cameras, each consisting of a coded aperture (or mask) of a Modified Uniformly Redundant Array (MURA) of rank 19 positioned on top of a CdTe detector. The detector pixel pitch is 350 μm and its active area is 4.4 × 4.4 cm2, while the mask element size is 1.7 mm. The detectable photon energy ranges from 15 keV up to 200 keV with an energy resolution of 3–4 keV FWHM. Triangulation is exploited to estimate the 3D spatial coordinates of the radioactive spots within the system FOV. Two extended sources with uniformly distributed activity (11 and 24 mm in diameter, respectively), positioned 16 cm from the system and with 3 cm between their centers, can be resolved and localized with accuracy better than 5%. The results indicate that the estimated positions of spatially extended sources lie within their volume and that neighboring sources, even with a low level of radioactivity, such as 30 MBq, can be clearly distinguished with an acquisition time of about 3 seconds.
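The triangulation step can be sketched as a least-squares intersection of two back-projected rays; the camera geometry below is illustrative, not the paper's setup:

```python
import numpy as np

# Each camera yields a ray (origin + unit direction) toward the source; the
# estimate is the point minimizing the summed squared distance to all rays:
# solve sum_i (I - d_i d_i^T)(p - o_i) = 0 for p.
def triangulate(origins, directions):
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

true_src = np.array([2.0, 1.0, 16.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])]
directions = [(true_src - o) / np.linalg.norm(true_src - o) for o in origins]
est = triangulate(origins, directions)   # recovers the source position
```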

  16. Small-Aperture Monovision and the Pulfrich Experience: Absence of Neural Adaptation Effects

    PubMed Central

    Plainis, Sotiris; Petratou, Dionysia; Giannakopoulou, Trisevgeni; Radhakrishnan, Hema; Pallikaris, Ioannis G.; Charman, W. Neil

    2013-01-01

    Purpose To explore whether adaptation reduces the interocular visual latency differences and the induced Pulfrich effect caused by the anisocoria implicit in small-aperture monovision. Methods Anisocoric vision was simulated in two adults by wearing in the non-dominant eye for 7 successive days, while awake, an opaque soft contact lens (CL) with a small, central, circular aperture. This was repeated with aperture diameters of 1.5 and 2.5 mm. Each day, monocular and binocular pattern-reversal Visual Evoked Potentials (VEP) were recorded. Additionally, the Pulfrich effect was measured: the task of the subject was to state whether a 2-deg spot appeared in front of or behind the plane of a central cross when moved left-to-right or right-to-left on a display screen. The retinal illuminance of the dominant eye was varied using neutral density (ND) filters to establish the ND value which eliminated the Pulfrich effect for each lens. All experiments were performed at luminance levels of 5 and 30 cd/m2. Results Interocular differences in monocular VEP latency (at 30 cd/m2) rose to about 12–15 ms and 20–25 ms when the CL aperture was 2.5 and 1.5 mm, respectively. The effect was more pronounced at 5 cd/m2 (i.e. with larger natural pupils). A strong Pulfrich effect was observed under all conditions, with the effect being less striking for the 2.5 mm aperture. No neural adaptation appeared to occur: neither the interocular differences in VEP latency nor the ND value required to null the Pulfrich effect reduced over each 7-day period of anisocoric vision. Conclusions Small-aperture monovision produced marked interocular differences in visual latency and a Pulfrich experience. These were not reduced by adaptation, perhaps because the natural pupil diameter of the dominant eye was continually changing throughout the day due to varying illumination and other factors, making adaptation difficult. PMID:24155881

  17. Requirements for imaging vulnerable plaque in the coronary artery using a coded aperture imaging system

    NASA Astrophysics Data System (ADS)

    Tozian, Cynthia

    A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern that was tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered-tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot-rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data, as well as other plate designs, were analyzed for use as a small-animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture, causing sudden, unexpected coronary occlusion and death.
The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved
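The NTHT pattern described above starts from a basic MURA; a minimal sketch of the standard quadratic-residue MURA construction for a prime order p (the specific 1,920-pinhole NTHT layout is particular to this work and is not reproduced here):

```python
def is_quadratic_residue(i, p):
    """True if i is a nonzero quadratic residue modulo the prime p."""
    return any((x * x) % p == i for x in range(1, p))

def mura_mask(p):
    """Basic p x p MURA aperture (1 = open pinhole, 0 = opaque),
    built from the quadratic-residue recipe; p must be prime."""
    C = [1 if is_quadratic_residue(i, p) else -1 for i in range(p)]
    mask = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                mask[i][j] = 0          # first row closed
            elif j == 0:
                mask[i][j] = 1          # first column (i != 0) open
            elif C[i] * C[j] == 1:
                mask[i][j] = 1          # open where residue signs agree
    return mask
```

An NTHT variant is then obtained by embedding this pattern in a grid twice as fine, so that no two open elements share an edge; the resulting mask opens exactly (p² - 1)/2 of the p² base cells.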

  18. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detecting, identifying, and localizing weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify, and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5×5×2 in.³ each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in.³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the developed coded aperture, Compton, and hybrid imaging algorithms will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) measurements, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Algorithms were also developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data.
Results of image reconstruction algorithms at various speeds and distances will be presented as well as
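The Compton half of the hybrid approach derives, for each scatter-plus-absorption event, a cone of possible source directions from the scattering kinematics. A minimal sketch of the cone-angle computation (energies in keV; an illustrative helper, not the TMI's actual processing code):

```python
import math

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_deposited, e_total):
    """Opening angle (radians) of the Compton cone for a photon of
    initial energy e_total that deposits e_deposited in the scatter
    detector, leaving e_total - e_deposited for the absorber."""
    e_scattered = e_total - e_deposited
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("kinematically forbidden energy split")
    return math.acos(cos_theta)
```

For a 662 keV Cs-137 photon, energy deposits beyond the Compton edge (about 478 keV) are rejected as kinematically forbidden, which is one way such systems filter invalid event pairings.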

  19. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    SciTech Connect

    Joshi, S; Kaye, W; Jaworski, J; He, Z

    2015-06-15

Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for
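The backprojection decoding sketched in the Methods section can be illustrated with a toy 2-D correlation reconstruction, in which a far-field point source simply shifts the mask shadow on the detector (a deliberately simplified sketch with a random mask, not the Polaris processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
mask = (rng.random((N, N)) < 0.5).astype(float)   # random open/closed aperture

def detector_counts(source_shift):
    """Ideal noiseless shadowgram: the mask pattern circularly
    shifted by the source's angular offset."""
    dy, dx = source_shift
    return np.roll(np.roll(mask, dy, axis=0), dx, axis=1)

def backproject(counts):
    """Circular cross-correlation of the shadowgram with the mask,
    computed via FFT; the correlation peak gives the source direction."""
    return np.real(np.fft.ifft2(np.fft.fft2(counts) * np.conj(np.fft.fft2(mask))))

shadow = detector_counts((5, 3))
image = backproject(shadow)
peak = np.unravel_index(np.argmax(image), image.shape)
```

Because the shadowgram is the mask shifted by the source offset, the cross-correlation reduces to the mask's autocorrelation centered at that offset, so the peak lands at the source position.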

  20. General adaptive-neighborhood technique for improving synthetic aperture radar interferometric coherence estimation.

    PubMed

    Vasile, Gabriel; Trouvé, Emmanuel; Ciuc, Mihai; Buzuloiu, Vasile

    2004-08-01

A new method for filtering the coherence map derived from synthetic aperture radar (SAR) interferometric data is presented. For each pixel of the interferogram, an adaptive neighborhood is determined by a region-growing technique driven by the information provided by the amplitude images. Then pixels in the derived adaptive neighborhood are complex-averaged to yield the filtered value of the coherence, after a phase-compensation step is performed. An extension of the algorithm is proposed for polarimetric interferometric SAR images. The proposed method has been applied to both European Remote Sensing (ERS) satellite SAR images and airborne high-resolution polarimetric interferometric SAR images. Both subjective and objective performance analyses, including coherence edge detection, show that the proposed method provides better results than the standard phase-compensated fixed multilook filter and the Lee adaptive coherence filter.
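The two stages described in this abstract, amplitude-driven region growing followed by complex averaging, can be sketched on toy arrays as follows (the growth rule and tolerance are illustrative choices, not the paper's exact criteria):

```python
import numpy as np

def grow_neighborhood(amplitude, seed, rel_tol=0.25, max_size=64):
    """4-connected region growing: accept neighbors whose amplitude is
    within rel_tol (relative) of the seed pixel's amplitude."""
    rows, cols = amplitude.shape
    ref = amplitude[seed]
    region, frontier = {seed}, [seed]
    while frontier and len(region) < max_size:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region
                    and abs(amplitude[nr, nc] - ref) <= rel_tol * ref):
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

def coherence(s1, s2, region):
    """Sample complex coherence magnitude over the adaptive neighborhood."""
    idx = tuple(np.array(list(region)).T)
    num = np.abs(np.sum(s1[idx] * np.conj(s2[idx])))
    den = np.sqrt(np.sum(np.abs(s1[idx])**2) * np.sum(np.abs(s2[idx])**2))
    return num / den
```

By the Cauchy-Schwarz inequality the estimate lies in [0, 1], reaching 1 when the two acquisitions differ only by a constant complex factor over the neighborhood.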

  1. Broadband chirality-coded meta-aperture for photon-spin resolving

    PubMed Central

    Du, Luping; Kou, Shan Shan; Balaur, Eugeniu; Cadusch, Jasper J.; Roberts, Ann; Abbey, Brian; Yuan, Xiao-Cong; Tang, Dingyuan; Lin, Jiao

    2015-01-01

The behaviour of light transmitted through an individual subwavelength aperture becomes counterintuitive in the presence of surrounding ‘decoration', a phenomenon known as the extraordinary optical transmission. Despite being polarization-sensitive, such an individual nano-aperture often cannot differentiate between the two distinct spin-states of photons because of the loss of photon information on light-aperture interaction. This creates a ‘blind-spot' for the aperture with respect to the helicity of chiral light. Here we report the development of a subwavelength aperture embedded with metasurfaces dubbed a ‘meta-aperture', which breaks this spin degeneracy. By exploiting the phase-shaping capabilities of metasurfaces, we are able to create specific meta-apertures in which the pair of circularly polarized light spin-states produces opposite transmission spectra over a broad spectral range. The concept incorporating metasurfaces with nano-apertures provides a venue for exploring new physics on spin-aperture interaction and potentially has a broad range of applications in spin-optoelectronics and chiral sensing. PMID:26628047

  2. Wavefront phase retrieval with multi-aperture Zernike filter for atmospheric sensing and adaptive optics applications

    NASA Astrophysics Data System (ADS)

    Bordbar, Behzad; Farwell, Nathan H.; Vorontsov, Mikhail A.

    2016-09-01

    A novel scintillation resistant wavefront sensor based on a densely packed array of classical Zernike filters, referred to as the multi-aperture Zernike wavefront sensor (MAZ-WFS), is introduced and analyzed through numerical simulations. Wavefront phase reconstruction in the MAZ-WFS is performed using iterative algorithms that are optimized for phase aberration sensing in severe atmospheric turbulence conditions. The results demonstrate the potential of the MAZ-WFS for high-resolution retrieval of turbulence-induced phase aberrations in strong scintillation conditions for atmospheric sensing and adaptive optics applications.

  3. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for the nonexpert using ISAM imaging is also significantly lowered.

  4. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    NASA Astrophysics Data System (ADS)

    Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.

    2014-12-01

We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10 - 100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of ~ 2 - 5 GeV. X-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.

  5. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-02

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determined successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as 1/4 the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made.

  6. Adaptive-neighborhood speckle removal in multitemporal synthetic aperture radar images.

    PubMed

    Ciuc, M; Bolon, P; Trouve, E; Buzuloiu, V; Rudant, J P

    2001-11-10

    We present a new method for multitemporal synthetic aperture radar image filtering using three-dimensional (3D) adaptive neighborhoods. The method takes both spatial and temporal information into account to derive the speckle-free value of a pixel. For each pixel individually, a 3D adaptive neighborhood is determined that contains only pixels belonging to the same distribution as the current pixel. Then statistics computed inside the established neighborhood are used to derive the filter output. It is shown that the method provides good results by drastically reducing speckle over homogeneous areas while retaining edges and thin structures. The performances of the proposed method are compared in terms of subjective and objective measures with those given by several classical speckle-filtering methods.

  7. Target-adaptive polarimetric synthetic aperture radar target discrimination using maximum average correlation height filters.

    PubMed

    Sadjadi, Firooz A; Mahalanobis, Abhijit

    2006-05-01

We report the development of a technique for adaptive selection of polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in phase and quadrature phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target versus clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method on real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target versus clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of polarimetric radar, one can noticeably improve the discrimination of targets from clutter.

  8. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  9. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.
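The sensitivity, specificity, and accuracy figures reported above reduce to simple ratios of per-voxel confusion counts; a small helper for the arithmetic (the counts in the example are made up purely to illustrate how such percentages arise, not the study's data):

```python
def classification_metrics(tp, fp, tn, fn):
    """Per-voxel classification metrics from confusion counts."""
    sensitivity = tp / (tp + fn)   # fraction of cancerous voxels found
    specificity = tn / (tn + fp)   # fraction of healthy voxels cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For instance, 924 true positives against 76 false negatives would give the 92.4% sensitivity quoted, whatever the absolute voxel counts.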

  10. Self characterization of a coded aperture array for neutron source imaging.

    PubMed

    Volegov, P L; Danly, C R; Fittinghoff, D N; Guler, N; Merrill, F E; Wilde, C H

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  11. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  12. Self characterization of a coded aperture array for neutron source imaging

    SciTech Connect

    Volegov, P. L. Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H.; Fittinghoff, D. N.

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  13. GAMER: GPU-accelerated Adaptive MEsh Refinement code

    NASA Astrophysics Data System (ADS)

    Schive, Hsi-Yu; Tsai, Yu-Chih; Chiueh, Tzihong

    2016-12-01

GAMER (GPU-accelerated Adaptive MEsh Refinement) is a general-purpose adaptive mesh refinement (AMR) + GPU framework that solves hydrodynamics with self-gravity. The code provides a variety of GPU-accelerated hydrodynamic and Poisson solvers and supports hybrid OpenMP/MPI/GPU parallelization, concurrent CPU/GPU execution for performance optimization, and a Hilbert space-filling curve for load balance. Although the code is designed for simulating galaxy formation, it can be easily modified to solve a variety of applications with different governing equations. All optimization strategies implemented in the code can be inherited straightforwardly.
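The Hilbert space-filling curve mentioned for load balancing linearizes the patch grid so that patches adjacent along the curve are also spatially close; sorting patches by their Hilbert index and splitting the sorted list into equal chunks then gives each MPI rank a compact workload. A standard xy-to-Hilbert-index conversion (a generic sketch, not GAMER's implementation):

```python
def hilbert_index(n, x, y):
    """Map cell (x, y) on an n-by-n grid (n a power of two) to its
    distance along the Hilbert curve."""
    d = 0
    s = n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:                     # rotate/reflect the quadrant
            if rx == 1:
                x, y = n - 1 - x, n - 1 - y
            x, y = y, x
        s //= 2
    return d
```

The key property exploited for load balance is that consecutive indices always map to edge-adjacent cells.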

  14. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  15. Adaptive face coding and discrimination around the average face.

    PubMed

    Rhodes, Gillian; Maloney, Laurence T; Turner, Jenny; Ewing, Louise

    2007-03-01

    Adaptation paradigms highlight the dynamic nature of face coding and suggest that identity is coded relative to an average face that is tuned by experience. In low-level vision, adaptive coding can enhance sensitivity to differences around the adapted level. We investigated whether sensitivity to differences around the average face is similarly enhanced. Converging evidence from three paradigms showed no enhancement. Discrimination of small interocular spacing differences was not better for faces close to the average (Study 1). Nor was perceived similarity reduced for face pairs close to (spanning) the average (Study 2). On the contrary, these pairs were judged most similar. Maximum likelihood perceptual difference scaling (Studies 3 and 4) confirmed that sensitivity to differences was reduced, not enhanced, around the average. We conclude that adaptive face coding does not enhance discrimination around the average face.

  16. Performance analysis of adaptive fiber laser array propagating in atmosphere with correction of high order aberrations in sub-aperture

    NASA Astrophysics Data System (ADS)

    Li, Feng; Geng, Chao; Li, Xinyang; Qiu, Qi

    2016-10-01

Recently developed adaptive fiber laser array techniques provide a promising way to combine aberration correction with laser beam transmission. Existing research focuses on compensating low-order aberrations (pistons and tips/tilts) in each sub-aperture and has achieved excellent correction results for weak and moderate turbulence over short ranges. However, such results are not adequate for future laser applications that face longer ranges and stronger turbulence, so compensation of high-order aberrations within each sub-aperture is necessary. The relationship between the corrected order of sub-aperture aberrations and far-field metrics such as power-in-the-bucket (PIB) and Strehl ratio is investigated through numerical simulation in this paper. The results show that increasing the array number does not effectively improve the far-field metrics if the sub-aperture size is fixed. Low-order aberration compensation in sub-apertures performs well only when the turbulence is weak; piston compensation becomes ineffective, and higher-order compensation is necessary, when the turbulence gets strong enough. Cost functions of the adaptive fiber laser array with high-order aberration correction in sub-apertures are defined and the optimum correction orders are discussed. The results show that high-order compensation up to roughly the first ten Zernike orders is acceptable, where a balance can be reached between improvement of the far-field metrics and the cost and complexity of the system.
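The power-in-the-bucket (PIB) metric used above is simply the fraction of far-field energy landing inside a target radius; a minimal sketch on a pixelated intensity map (the geometry and radius are illustrative):

```python
import numpy as np

def power_in_bucket(intensity, center, radius):
    """Fraction of total far-field power inside a circular 'bucket'
    of the given radius (in pixels) around center = (row, col)."""
    yy, xx = np.indices(intensity.shape)
    inside = (yy - center[0])**2 + (xx - center[1])**2 <= radius**2
    return intensity[inside].sum() / intensity.sum()
```

For a combined beam from a fiber array, PIB rises toward 1 as piston, tip/tilt, and higher-order errors across the sub-apertures are corrected and more energy concentrates on target.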

  17. Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.

    PubMed

    Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen

    2016-04-20

The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the Quantization Parameter Cascading (QPC) scheme that assigns Quantization Parameters (Qps) to different hierarchical layers in order to further improve the Rate-Distortion (RD) performance. However, only static QPC schemes have been suggested in the HEVC test model (HM), which are unable to fully explore the potentials of QPC. In this paper, we propose an adaptive QPC scheme for the HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageable low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to Group-Of-Picture (GOP)-level, frame-level and Coding Unit (CU)-level Qp assignments. Comprehensive experiments have demonstrated that the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison as well as HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in videos of different types of contents, which provide useful insights on how to further improve static QPC schemes.

  18. Adaptive Quantization Parameter Cascading in HEVC Hierarchical Coding.

    PubMed

    Zhao, Tiesong; Wang, Zhou; Chen, Chang Wen

    2016-07-01

    The state-of-the-art High Efficiency Video Coding (HEVC) standard adopts a hierarchical coding structure to improve its coding efficiency. This allows for the quantization parameter cascading (QPC) scheme that assigns quantization parameters (Qps) to different hierarchical layers in order to further improve the rate-distortion (RD) performance. However, only static QPC schemes have been suggested in HEVC test model, which are unable to fully explore the potentials of QPC. In this paper, we propose an adaptive QPC scheme for an HEVC hierarchical structure to code natural video sequences characterized by diversified textures, motions, and encoder configurations. We formulate the adaptive QPC scheme as a non-linear programming problem and solve it in a scientifically sound way with a manageable low computational overhead. The proposed model addresses a generic Qp assignment problem of video coding. Therefore, it also applies to group-of-picture-level, frame-level and coding unit-level Qp assignments. Comprehensive experiments have demonstrated that the proposed QPC scheme is able to adapt quickly to different video contents and coding configurations while achieving noticeable RD performance enhancement over all static and adaptive QPC schemes under comparison as well as HEVC default frame-level rate control. We have also made valuable observations on the distributions of adaptive QPC sets in the videos of different types of contents, which provide useful insights on how to further improve static QPC schemes.
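In a dyadic hierarchical-B GOP, a QPC scheme amounts to mapping each frame's temporal layer to a Qp offset from the base Qp. A toy static assignment for a GOP of 8 (the offsets are illustrative, not the paper's values; the paper's contribution is choosing such offsets adaptively per content):

```python
def temporal_layer(poc, gop_size=8):
    """Temporal layer of a frame from its picture order count (POC)
    in a dyadic hierarchical-B GOP."""
    if poc % gop_size == 0:
        return 0                 # key pictures sit on GOP boundaries
    layer, step = 1, gop_size // 2
    while poc % step != 0:       # halve the stride until it divides poc
        step //= 2
        layer += 1
    return layer

def frame_qp(poc, base_qp=32, offsets=(0, 1, 2, 3)):
    """Static QPC: deeper (less-referenced) layers get a larger Qp."""
    return base_qp + offsets[temporal_layer(poc)]
```

Frames referenced by many others (low layers) are coded at finer quantization, which is the RD intuition behind cascading in the first place.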

  19. BAT Slew Survey (BATSS): Slew Data Analysis for the Swift-BAT Coded Aperture Imaging Telescope

    NASA Astrophysics Data System (ADS)

    Copete, Antonio Julio

The BAT Slew Survey (BATSS) is the first wide-field survey of the hard X-ray sky (15--150 keV) with a slewing coded aperture imaging telescope. Its fine time resolution, high sensitivity and large sky coverage make it particularly well-suited for detections of transient sources with variability timescales in the ∼1 sec--1 hour range, such as Gamma-Ray Bursts (GRBs), flaring stars and Blazars. As implemented, BATSS observations are found to be consistently more sensitive than their BAT pointing-mode counterparts, by an average of 20% over the 10 sec--3 ksec exposure range, due to intrinsic systematic differences between them. The survey's motivation, development and implementation are presented, including a description of the software and hardware infrastructure that made this effort possible. The analysis of BATSS science data concentrates on the results of the 4.8-year BATSS GRB survey, beginning with the discovery of GRB 070326 during its preliminary testing phase. A total of nineteen (19) GRBs were detected exclusively in BATSS slews over this period, making it the largest contribution to the Swift GRB catalog from any ground-based analysis. The timing and spectral properties of prompt emission from BATSS GRBs reveal their consistency with Swift long GRBs (L-GRBs), though with instances of GRBs with unusually soft spectra or X-Ray Flashes (XRFs), GRBs near the faint end of the fluence distribution accessible to Swift-BAT, and a probable short GRB with extended emission, all uncommon traits within the general Swift GRB population. In addition, the BATSS overall detection rate of 0.49 GRBs/day of instrument time is a significant increase (45%) above the BAT pointing detection rate. This result was confirmed by a GRB detection simulation model, which further showed the increased sky coverage of slews to be the dominant effect in enhancing GRB detection probabilities. A review of lessons learned is included, with specific proposals to broaden both the number and

  20. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding: it can achieve arbitrarily small redundancy and admits a simple and fast decoder.
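
    The paper's specific technique is not reproduced here, but the core idea of bit-wise adaptive coding with per-bit probability estimates can be illustrated with a generic adaptive binary arithmetic coder (a sketch for illustration only; the Krichevsky-Trofimov estimator and float-interval arithmetic below are assumptions, not the authors' method):

    ```python
    def kt_estimate(ones, zeros):
        # Krichevsky-Trofimov estimate of P(next bit = 1) from counts so far
        return (ones + 0.5) / (ones + zeros + 1.0)

    def encode(bits):
        """Shrink [0, 1) around the bit sequence; the probability model adapts per bit."""
        lo, hi = 0.0, 1.0
        ones = zeros = 0
        for b in bits:
            p1 = kt_estimate(ones, zeros)
            mid = lo + (hi - lo) * (1.0 - p1)   # [lo, mid) encodes 0, [mid, hi) encodes 1
            if b:
                lo = mid; ones += 1
            else:
                hi = mid; zeros += 1
        return (lo + hi) / 2.0                  # any number inside the final interval

    def decode(x, n):
        """Mirror the encoder's interval arithmetic to recover n bits from x."""
        lo, hi = 0.0, 1.0
        ones = zeros = 0
        out = []
        for _ in range(n):
            p1 = kt_estimate(ones, zeros)
            mid = lo + (hi - lo) * (1.0 - p1)
            if x >= mid:
                out.append(1); lo = mid; ones += 1
            else:
                out.append(0); hi = mid; zeros += 1
        return out
    ```

    Because the decoder replays the same probability updates as the encoder, the per-bit estimates may depend on all previously decoded bits, which is the adaptability the abstract describes.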

  1. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) along various directional spatial frequency axes are investigated for a cubic phase mask (CPM) with circular and square apertures. Although the OTF has no exact zero points, for a circular aperture it comes very close to zero at low frequencies on the diagonal axis, which results in degradation of restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images with a large depth of field.
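
    The key property exploited here can be sketched numerically in one dimension, where the incoherent OTF is the autocorrelation of the pupil function. A cubic phase pupil keeps its MTF bounded away from zero, while an equally strong defocus aberration produces nulls (the phase coefficients α = 20 and ψ = 15 below are illustrative assumptions, not values from the paper):

    ```python
    import cmath

    def mtf_1d(pupil_phase, n=200, shifts=50):
        """Incoherent 1-D MTF as the normalized autocorrelation of the pupil function."""
        xs = [2.0 * i / (n - 1) - 1.0 for i in range(n)]    # pupil coordinate in [-1, 1]
        p = [cmath.exp(1j * pupil_phase(x)) for x in xs]
        mtf = []
        for s in range(shifts):
            acc = sum(p[i] * p[i + s].conjugate() for i in range(n - s))
            mtf.append(abs(acc))
        return [v / mtf[0] for v in mtf]

    cubic = mtf_1d(lambda x: 20.0 * x ** 3)     # cubic phase mask, alpha = 20 (assumed)
    defocus = mtf_1d(lambda x: 15.0 * x ** 2)   # comparable defocus aberration
    ```

    The cubic-phase MTF stays well above zero across the passband, whereas the defocused MTF passes through near-nulls; it is the absence of nulls that makes digital restoration possible in wavefront coding.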

  2. Weighted adaptively grouped multilevel space time trellis codes

    NASA Astrophysics Data System (ADS)

    Jain, Dharmvir; Sharma, Sanjay

    2015-05-01

    In existing grouped multilevel space-time trellis codes (GMLSTTCs), the groups of transmit antennas are predefined, and the transmit power is equally distributed across all transmit antennas. When the channel parameters are perfectly known at the transmitter, an adaptive antenna grouping and beamforming scheme can achieve better performance by optimally grouping the transmit antennas and properly weighting the transmitted signals based on the available channel information. In this paper, we present a new code designed by combining GMLSTTCs, adaptive antenna grouping and beamforming using the channel state information at the transmitter (CSIT), henceforth referred to as weighted adaptively grouped multilevel space time trellis codes (WAGMLSTTCs). The CSIT is used to adaptively group the transmitting antennas and provide a beamforming scheme by allocating different powers to the transmit antennas. Simulation results show that WAGMLSTTCs provide an improvement in error performance of 2.6 dB over GMLSTTCs.

  3. A CLOSE COMPANION SEARCH AROUND L DWARFS USING APERTURE MASKING INTERFEROMETRY AND PALOMAR LASER GUIDE STAR ADAPTIVE OPTICS

    SciTech Connect

    Bernat, David; Bouchez, Antonin H.; Cromer, John L.; Dekany, Richard G.; Moore, Anna M.; Ireland, Michael; Tuthill, Peter; Martinache, Frantz; Angione, John; Burruss, Rick S.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; Kibblewhite, Edward; McKenna, Daniel L.; Petrie, Harold L.; Roberts, Jennifer; Shelton, J. Chris; Thicksten, Robert P.; Trinh, Thang

    2010-06-01

    We present a close companion search around 16 known early L dwarfs using aperture masking interferometry with Palomar laser guide star adaptive optics (LGS AO). The use of aperture masking allows the detection of close binaries, corresponding to projected physical separations of 0.6-10.0 AU for the targets of our survey. This survey achieved median contrast limits of ΔK ≈ 2.3 for separations between 1.2λ/D and 4λ/D, and ΔK ≈ 1.4 at 2/3 λ/D. We present four candidate binaries detected with moderate-to-high confidence (90%-98%). Two have projected physical separations less than 1.5 AU. This may indicate that tight-separation binaries contribute more significantly to the binary fraction than currently assumed, consistent with spectroscopic and photometric overluminosity studies. Ten targets of this survey have previously been observed with the Hubble Space Telescope as part of companion searches. We use the increased resolution of aperture masking to search for close or dim companions that would be obscured by full aperture imaging, finding two candidate binaries. This survey is the first application of aperture masking with LGS AO at Palomar. Several new techniques for the analysis of aperture masking data in the low signal-to-noise regime are explored.

  4. Exo-planet Direct Imaging with On-Axis and/or Segmented Apertures in Space: Adaptive Compensation of Aperture Discontinuities

    NASA Astrophysics Data System (ADS)

    Soummer, Remi

    Capitalizing on a recent breakthrough in wavefront control theory for obscured apertures made by our group, we propose to demonstrate a method to achieve high contrast exoplanet imaging with on-axis obscured apertures. Our new algorithm, which we named Adaptive Compensation of Aperture Discontinuities (ACAD), provides the ability to compensate for aperture discontinuities (segment gaps and/or secondary mirror supports) by controlling deformable mirrors in a nonlinear wavefront control regime not utilized before but conceptually similar to the beam reshaping used in PIAA coronagraphy. We propose here an in-air demonstration at 1e-7 contrast, enabled by adding a second deformable mirror to our current test-bed. This expansion of the scope of our current efforts in exoplanet imaging technologies will enable us to demonstrate an integrated solution for wavefront control and starlight suppression on complex aperture geometries. It is directly applicable at scales from moderate-cost exoplanet probe missions to the 2.4 m AFTA telescopes to future flagship UVOIR observatories with apertures potentially 16-20 m. Searching for nearby habitable worlds with direct imaging is one of the top scientific priorities established by the Astro2010 Decadal Survey. Achieving this ambitious goal will require 1e-10 contrast on a telescope large enough to provide angular resolution and sensitivity to planets around a significant sample of nearby stars. Such a mission must of course also be realized at an achievable cost. Lightweight segmented mirror technology allows larger diameter optics to fit in any given launch vehicle as compared to monolithic mirrors, and lowers total life-cycle costs from construction through integration & test, making it a compelling option for future large space telescopes. At smaller scales, on-axis designs with secondary obscurations and supports are less challenging to fabricate and thus more affordable than the off-axis unobscured primary mirror designs

  5. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, despite the barrier posed by the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to the condition of the mobile wireless channel, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
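
    At its core, AMC is a lookup from measured channel quality to the most aggressive modulation and coding scheme the link can sustain. A minimal sketch (the SNR thresholds below are illustrative assumptions, not 3GPP-specified values):

    ```python
    # Illustrative CQI-style lookup table: (min SNR in dB, modulation, code rate).
    # Threshold values are assumptions for illustration only.
    MCS_TABLE = [
        (1.0,  "QPSK",  1 / 3),
        (7.0,  "16QAM", 1 / 2),
        (13.0, "64QAM", 2 / 3),
        (19.0, "64QAM", 5 / 6),
    ]

    def select_mcs(snr_db):
        """Pick the most spectrally efficient modulation/coding pair the channel supports."""
        chosen = MCS_TABLE[0]          # fall back to the most robust scheme
        for entry in MCS_TABLE:
            if snr_db >= entry[0]:
                chosen = entry         # table is ordered, so keep the last feasible row
        return chosen[1], chosen[2]
    ```

    In a real system the feedback is a quantized CQI index rather than a raw SNR, but the adaptation logic is the same: better channels get denser constellations and higher code rates.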

  6. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    It has been shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms, e.g., JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes generate more distortion. Additionally, the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The proposed algorithm adaptively selects the image coding method, choosing between CSI-based modified JPEG and standard JPEG for a given target bit rate using so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm shows better performance at low bit rates and maintains the same performance at high bit rates.

  7. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  8. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  9. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T. ); Fischer, T.R. . Dept. of Electrical and Computer Engineering)

    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for "optimizing" the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  10. Adaptive feature extraction using sparse coding for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Haining; Liu, Chengliang; Huang, Yixiang

    2011-02-01

    In the signal processing domain, there has been growing interest in sparse coding with a learned dictionary instead of a predefined one, which is advocated as an effective mathematical description for the underlying principle of mammalian sensory systems in processing information. In this paper, sparse coding is introduced as a feature extraction technique for machinery fault diagnosis and an adaptive feature extraction scheme is proposed based on it. The two core problems of sparse coding, i.e., dictionary learning and coefficients solving, are discussed in detail. A natural extension of sparse coding, shift-invariant sparse coding, is also introduced. Then, the vibration signals of rolling element bearings are taken as the target signals to verify the proposed scheme, and shift-invariant sparse coding is used for vibration analysis. With the purpose of diagnosing the different fault conditions of bearings, features are extracted following the proposed scheme: basis functions are separately learned from each class of vibration signals trying to capture the defective impulses; a redundant dictionary is built by merging all the learned basis functions; based on the redundant dictionary, the diagnostic information is made explicit in the solved sparse representations of vibration signals; sparse features are formulated in terms of activations of atoms. The multiclass linear discriminant analysis (LDA) classifier is used to test the discriminability of the extracted sparse features and the adaptability of the learned atoms. The experiments show that sparse coding is an effective feature extraction technique for machinery fault diagnosis.
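
    The coefficient-solving step described above can be sketched with greedy matching pursuit over a small dictionary (a minimal stand-in for the shift-invariant sparse solvers used in the paper; the toy dictionary and signal in the usage below are assumptions):

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def matching_pursuit(signal, atoms, n_iter=2):
        """Greedy sparse coding: represent `signal` as a few activations of unit-norm atoms.

        Returns the coefficient vector (the 'sparse features') and the residual.
        """
        residual = list(signal)
        coeffs = [0.0] * len(atoms)
        for _ in range(n_iter):
            # pick the atom most correlated with the current residual
            k = max(range(len(atoms)), key=lambda j: abs(dot(residual, atoms[j])))
            c = dot(residual, atoms[k])
            coeffs[k] += c
            residual = [r - c * a for r, a in zip(residual, atoms[k])]
        return coeffs, residual
    ```

    In the fault-diagnosis scheme, the atoms would be basis functions learned from each class of vibration signals, and the resulting activation vectors would be passed to the LDA classifier.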

  11. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  12. CAMERA: a compact, automated, laser adaptive optics system for small aperture telescopes

    NASA Astrophysics Data System (ADS)

    Britton, Matthew; Velur, Viswa; Law, Nick; Choi, Philip; Penprase, Bryan E.

    2008-07-01

    CAMERA is an autonomous laser guide star adaptive optics system designed for small aperture telescopes. This system is intended to be mounted permanently on such a telescope to provide large amounts of flexibly scheduled observing time, delivering high angular resolution imagery in the visible and near infrared. The design employs a Shack Hartmann wavefront sensor, a 12x12 actuator MEMS device for high order wavefront compensation, and a solid state 355 nm Nd:YAG laser to generate a guide star. Commercial CCD and InGaAs detectors provide coverage in the visible and near infrared. CAMERA operates by selecting targets from a queue populated by users and executing these observations autonomously. This robotic system is targeted towards applications that are difficult to address using classical observing strategies: surveys of very large target lists, recurrently scheduled observations, and rapid response followup of transient objects. This system has been designed and costed, and a lab testbed has been developed to evaluate key components and validate autonomous operations.

  13. Towards a Network of Small Aperture Telescopes with Adaptive Optics Correction Capability

    NASA Astrophysics Data System (ADS)

    Cegarra Polo, M.; Lambert, A.

    2016-09-01

    A low-cost and compact Adaptive Optics (AO) system for a small aperture telescope (Meade LX200ACF 16") has been developed at UNSW Canberra, where its performance is currently being evaluated. It is based on COTS components, with the exception of a real-time control loop implemented in a Field Programmable Gate Array (FPGA) on a small form factor board which also includes the wavefront image sensor. A Graphical User Interface (GUI) running on an external computer connected to the FPGA imaging board provides the operator with control of different parameters of the AO system, registration of results, and logging of gradients, Zernike coefficients and deformable mirror voltages for later troubleshooting. The U.S. Air Force Academy Falcon Telescope Network (USAFA FTN) is an international network of moderate aperture telescopes (20 inches) that provides raw imagery to FTN partners [1]. The FTN supports general purpose use, including astronomy, satellite imaging and STEM (Science, Technology, Engineering and Mathematics) support. Currently 5 nodes are in operation, operated on-site or remotely, and more are to be commissioned over the next few years. One of the network nodes is located at UNSW Canberra (Australia), where the ground-based space surveillance team is currently using it for research in different areas of Space Situational Awareness (SSA). Some current and future SSA goals include geostationary satellite characterization through imaging modalities like polarimetry and real time image processing of Low Earth Orbit (LEO) objects. The fact that all FTN nodes have the same configuration facilitates collaborative work between international teams at different nodes, so improvements and lessons learned at one site can be extended to the rest of the nodes. With respect to this, preliminary studies of the imagery improvement that would be achieved with the AO system developed at UNSW, installed on a second 16 inch Meade LX200ACF telescope and compared to the

  14. Peripheral adaptation codes for high odor concentration in glomeruli.

    PubMed

    Lecoq, Jérôme; Tiret, Pascale; Charpak, Serge

    2009-03-11

    Adaptation is a general property of sensory receptor neurons and has been extensively studied in isolated cell preparations of olfactory receptor neurons. In contrast, little is known about the conditions under which peripheral adaptation occurs in the CNS during odorant stimulation. Here, we used two-photon laser-scanning microscopy and targeted extracellular recording in freely breathing anesthetized rats to investigate the correlate of peripheral adaptation at the first synapse of the olfactory pathway in olfactory bulb glomeruli. We find that during sustained stimulation at high concentration, odorants can evoke local field potential (LFP) postsynaptic responses that rapidly adapt with time, some within two inhalations. Simultaneous measurements of LFP and calcium influx at olfactory receptor neuron terminals reveal that postsynaptic adaptation is associated with a decrease in the odorant-evoked calcium response, suggesting that it results from a decrease in glutamate release. This glomerular adaptation was concentration-dependent and did not change the glomerular input-output curve. In addition, in situ application of antagonists of either ionotropic glutamate receptors or metabotropic GABA(B) receptors did not affect this adaptation, thus ruling out the involvement of local presynaptic inhibition. Glomerular adaptation, therefore, reflects the response decline of olfactory receptor neurons to sustained odorant stimulation. We postulate that peripheral fast adaptation is a means by which glomerular output codes for high concentration of odor.

  15. Adaptive EZW coding using a rate-distortion criterion

    NASA Astrophysics Data System (ADS)

    Yin, Che-Yi

    2001-07-01

    This work presents a new method that improves on the EZW image coding algorithm. The standard EZW image coder uses a uniform quantizer with a threshold (deadzone) that is identical in all subbands; the quantization step sizes are not optimized in the rate-distortion sense. We modify the EZW by applying a Lagrange multiplier to search for the best step size for each subband and allocate the bit rate to each subband accordingly. We then implement the adaptive EZW codec to code the wavelet coefficients. Two coding environments, independent and dependent, are considered for the optimization process. The proposed image coder retains all the good features of the EZW, namely embedded coding, progressive transmission, and the ordering of important bits, and enhances it through rate-distortion optimization with respect to the step sizes.
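
    The Lagrangian search over step sizes can be sketched as follows: for each candidate step, measure distortion directly and estimate rate from the empirical entropy of the quantizer indices, then minimize J = D + λR (the toy coefficients and candidate steps in the usage are assumptions for illustration, not the paper's procedure):

    ```python
    import math
    from collections import Counter

    def quantize(coeffs, step):
        return [round(c / step) for c in coeffs]

    def rate_estimate(indices):
        # crude rate proxy: empirical entropy of the quantizer output, bits/coefficient
        n = len(indices)
        counts = Counter(indices)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def distortion(coeffs, step):
        # mean squared reconstruction error of the uniform quantizer
        return sum((c - round(c / step) * step) ** 2 for c in coeffs) / len(coeffs)

    def best_step(coeffs, candidate_steps, lam):
        """Lagrangian selection: minimize J = D + lam * R over the candidate steps."""
        return min(candidate_steps,
                   key=lambda s: distortion(coeffs, s)
                   + lam * rate_estimate(quantize(coeffs, s)))
    ```

    A small λ favors fine steps (low distortion, high rate); a large λ favors coarse steps, which is how the multiplier trades bit rate against quality per subband.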

  16. Link-Adaptive Distributed Coding for Multisource Cooperation

    NASA Astrophysics Data System (ADS)

    Cano, Alfonso; Wang, Tairan; Ribeiro, Alejandro; Giannakis, Georgios B.

    2007-12-01

    Combining multisource cooperation and link-adaptive regenerative techniques, a novel protocol is developed capable of achieving diversity order up to the number of cooperating users and large coding gains. The approach relies on a two-phase protocol. In Phase 1, cooperating sources exchange information-bearing blocks, while in Phase 2, they transmit reencoded versions of the original blocks. Different from existing approaches, participation in the second phase does not require correct decoding of Phase 1 packets. This allows relaying of soft information to the destination, thus increasing coding gains while retaining diversity properties. For any reencoding function the diversity order is expressed as a function of the rank properties of the distributed coding strategy employed. This result is analogous to the diversity properties of colocated multi-antenna systems. Particular cases include repetition coding, distributed complex field coding (DCFC), distributed space-time coding, and distributed error-control coding. Rate, diversity, complexity and synchronization issues are elaborated. DCFC emerges as an attractive choice because it offers high-rate, full spatial diversity, and relaxed synchronization requirements. Simulations confirm analytically established assessments.

  17. A Mechanically-Cooled, Highly-Portable, HPGe-Based, Coded-Aperture Gamma-Ray Imager

    SciTech Connect

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Hayward, Jason P; Raffo-Caiado, Ana Claudia

    2010-01-01

    Coded-aperture gamma-ray imaging is a mature technology that is capable of providing accurate and quantitative images of nuclear materials. Although it is potentially of high value to the safeguards and arms-control communities, it has yet to be fully embraced by those communities. One reason for this is the limited choice, high cost, and low efficiency of commercial instruments, while instruments made by research organizations are frequently large and/or unsuitable for field work. In this paper we present the results of a project that mates the coded-aperture imaging approach with the latest in commercially-available, position-sensitive, High Purity Germanium (HPGe) detectors. The instrument replaces a laboratory prototype that was unsuitable for anything other than demonstrations. The original instrument, and the cart on which it is mounted to provide mobility and pointing capabilities, has a footprint of ~2/3 m x 2 m, weighs ~100 kg, and requires cryogen refills every few days. In contrast, the new instrument is tripod mounted, weighs on the order of 25 kg, operates with a laptop computer, and is mechanically cooled. The instrument is being used in a program that is exploring the use of combined radiation and laser scanner imaging. The former provides information on the presence, location, and type of nuclear materials while the latter provides design verification information. To align the gamma-ray images with the laser scanner data, the Ge imager is fitted and aligned to a visible-light stereo imaging unit. This unit generates a locus of 3D points that can be matched to the precise laser scanner data. With this approach, the two instruments can be used completely independently at a facility, and yet the data can be accurately overlaid based on the very structures that are being measured.

  18. Adaptive directional lifting-based wavelet transform for image coding.

    PubMed

    Ding, Wenpeng; Wu, Feng; Wu, Xiaolin; Li, Shipeng; Li, Houqiang

    2007-02-01

    We present a novel 2-D wavelet transform scheme of adaptive directional lifting (ADL) in image coding. Instead of alternately applying horizontal and vertical lifting, as in present practice, ADL performs lifting-based prediction in local windows in the direction of high pixel correlation. Hence, it adapts far better to the image orientation features in local windows. The ADL transform is achieved by existing 1-D wavelets and is seamlessly integrated into the global wavelet transform. The predicting and updating signals of ADL can be derived even at the fractional pixel precision level to achieve high directional resolution, while still maintaining perfect reconstruction. To enhance the ADL performance, a rate-distortion optimized directional segmentation scheme is also proposed to form and code a hierarchical image partition adapting to local features. Experimental results show that the proposed ADL-based image coding technique outperforms JPEG 2000 in both PSNR and visual quality, with the improvement up to 2.0 dB on images with rich orientation features.
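
    For reference, a conventional (non-directional) lifting step looks like the 1-D integer LeGall 5/3 transform below; ADL generalizes the predict and update steps to operate along the locally dominant orientation instead of strictly horizontally and vertically. This is a generic sketch, not the paper's ADL code, and the boundary handling by index clamping is an assumption:

    ```python
    def lifting_53_forward(x):
        """One level of the LeGall 5/3 lifting wavelet (1-D, integer-to-integer).

        Predict each odd sample from its even neighbors, then update the evens
        with the resulting detail coefficients. Input length must be even.
        """
        even, odd = x[0::2], x[1::2]
        n_e, n_d = len(even), len(odd)
        # predict step: detail = odd sample minus average of even neighbors
        d = [odd[i] - (even[i] + even[min(i + 1, n_e - 1)]) // 2 for i in range(n_d)]
        # update step: smooth the evens with the new details
        s = [even[i] + (d[max(i - 1, 0)] + d[min(i, n_d - 1)] + 2) // 4 for i in range(n_e)]
        return s, d

    def lifting_53_inverse(s, d):
        """Undo the lifting steps in reverse order for perfect reconstruction."""
        n_d = len(d)
        even = [s[i] - (d[max(i - 1, 0)] + d[min(i, n_d - 1)] + 2) // 4 for i in range(len(s))]
        odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2 for i in range(n_d)]
        x = []
        for e, o in zip(even, odd):
            x.extend([e, o])
        return x
    ```

    Because each lifting step is inverted exactly by subtracting the same integer prediction, perfect reconstruction holds regardless of the prediction direction, which is what lets ADL steer the predictor along image features without losing invertibility.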

  19. Motion-compensated wavelet video coding using adaptive mode selection

    NASA Astrophysics Data System (ADS)

    Zhai, Fan; Pappas, Thrasyvoulos N.

    2004-01-01

    A motion-compensated wavelet video coder is presented that uses adaptive mode selection (AMS) for each macroblock (MB). The block-based motion estimation is performed in the spatial domain, and an embedded zerotree wavelet coder (EZW) is employed to encode the residue frame. In contrast to other motion-compensated wavelet video coders, where all the MBs are forced to be in INTER mode, we construct the residue frame by combining the prediction residual of the INTER MBs with the coding residual of the INTRA and INTER_ENCODE MBs. Different from INTER MBs that are not coded, the INTRA and INTER_ENCODE MBs are encoded separately by a DCT coder. By adaptively selecting the quantizers of the INTRA and INTER_ENCODE coded MBs, our goal is to equalize the characteristics of the residue frame in order to improve the overall coding efficiency of the wavelet coder. The mode selection is based on the variance of the MB, the variance of the prediction error, and the variance of the neighboring MBs' residual. Simulations show that the proposed motion-compensated wavelet video coder achieves a gain of around 0.7-0.8 dB PSNR over MPEG-2 TM5, and a comparable PSNR to other 2D motion-compensated wavelet-based video codecs. It also provides potential visual quality improvement.

  20. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-resolution Coded-aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott

    2017-01-01

    Wide-field (≳100 deg²) hard X-ray coded-aperture telescopes with high angular resolution (≲2′) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes, enabling rapid followup studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or model the systematic uncertainty on a timescale over which the model remains invariant. We introduce two new techniques to improve detection sensitivity, which are designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics based probabilistic approach to evaluate the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using the data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
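
    The second technique, evaluating detection significance directly from Poisson statistics rather than subtracting an uncertain background, can be sketched as a tail-probability computation. This is a generic sketch, not the authors' exact procedure; the bisection-based conversion to a Gaussian-equivalent sigma is an assumption:

    ```python
    import math

    def poisson_sf(n, mu):
        """P(X >= n) for X ~ Poisson(mu): chance of seeing n or more counts
        in a sky pixel when the background model predicts a mean of mu."""
        term = math.exp(-mu)          # P(X = 0)
        cdf = 0.0
        for k in range(n):            # accumulate P(X < n) term by term
            cdf += term
            term *= mu / (k + 1)
        return 1.0 - cdf

    def detection_sigma(n, mu):
        """Convert the Poisson tail probability to a Gaussian-equivalent sigma."""
        p = poisson_sf(n, mu)
        # invert the standard normal upper tail by bisection on [0, 40]
        lo, hi = 0.0, 40.0
        for _ in range(100):
            mid = (lo + hi) / 2
            if 0.5 * math.erfc(mid / math.sqrt(2)) > p:
                lo = mid
            else:
                hi = mid
        return lo
    ```

    Working with the tail probability directly avoids the noise amplification of subtracting a background map that is itself poorly constrained at low counts per pixel.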

  1. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101

  2. Coded aperture imaging of fusion source in a plasma focus operated with pure D₂ and a D₂-Kr gas admixture

    SciTech Connect

    Springham, S. V.; Talebitaher, A.; Shutler, P. M. E.; Rawat, R. S.; Lee, P.; Lee, S.

    2012-09-10

    The coded aperture imaging (CAI) technique has been used to investigate the spatial distribution of DD fusion in a 1.6 kJ plasma focus (PF) device operated in, alternatively, pure deuterium or a deuterium-krypton admixture. The coded mask pattern is based on a Singer cyclic difference set with 25% open fraction and positioned close to 90° to the plasma focus axis, with CR-39 detectors used to register tracks of protons from the D(d, p)T reaction. Comparing the CAI proton images for pure D₂ and D₂-Kr admixture operation reveals clear differences in size, density, and shape between the fusion sources for these two cases.
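
    The property that makes difference-set masks work is their flat cyclic autocorrelation, which lets a simple correlation step decode the detector pattern exactly. A 1-D sketch using the (11, 5, 2) quadratic-residue difference set as a small illustrative stand-in for the paper's Singer set:

    ```python
    V = 11                                            # number of mask cells
    QR = {(x * x) % V for x in range(1, V)}           # quadratic residues mod 11
    mask = [1 if i in QR else 0 for i in range(V)]    # (11, 5, 2) cyclic difference set
    decoder = [2 * m - 1 for m in mask]               # balanced decoding array
    K, LAM = sum(mask), 2                             # k = 5 open cells, lambda = 2

    def cyclic_convolve(a, b):
        v = len(a)
        return [sum(a[j] * b[(t - j) % v] for j in range(v)) for t in range(v)]

    def cyclic_correlate(a, b):
        v = len(a)
        return [sum(a[t] * b[(t - s) % v] for t in range(v)) for s in range(v)]

    def encode(source):
        # each source element casts a shifted copy of the mask onto the detector
        return cyclic_convolve(source, mask)

    def decode(detector):
        total = sum(detector) / K                     # recover total source intensity
        corr = cyclic_correlate(detector, decoder)
        # flat sidelobes: corr[s] = 2*(K - LAM) * I[s] + (2*LAM - K) * total
        return [(c - (2 * LAM - K) * total) / (2 * (K - LAM)) for c in corr]
    ```

    Because every nonzero cyclic shift of the mask overlaps it in exactly λ = 2 places, the sidelobes of the correlation are constant and can be subtracted off exactly, which is why difference-set apertures reconstruct extended fusion sources without ghost artifacts.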

  3. DM/LCWFC based adaptive optics system for large aperture telescopes imaging from visible to infrared waveband.

    PubMed

    Sun, Fei; Cao, Zhaoliang; Wang, Yukun; Zhang, Caihua; Zhang, Xingyun; Liu, Yong; Mu, Quanquan; Xuan, Li

    2016-11-28

Almost all the deformable mirror (DM) based adaptive optics systems (AOSs) used on large aperture telescopes work at the infrared waveband due to the limited number of actuators. To extend the imaging waveband to the visible, we propose a combined DM and liquid crystal wavefront corrector (DM/LCWFC) AOS. The LCWFC corrects the high-frequency aberrations corresponding to the visible waveband, and the aberrations at the infrared are corrected by the DM. The calculated results show that, for a 10 m telescope, a DM/LCWFC AOS containing a 1538-actuator DM and a 404 × 404 pixel LCWFC is equivalent to a DM-based AOS with 4057 actuators. This indicates that a DM/LCWFC AOS can work from the visible to the infrared on larger aperture telescopes. Simulations and a laboratory experiment were performed for a 2 m telescope. The experimental results show that, after correction, near diffraction limited resolution USAF target images are obtained at the wavebands of 0.7-0.9 μm, 0.9-1.5 μm and 1.5-1.7 μm respectively. Therefore, the DM/LCWFC AOS may be used to extend the imaging waveband of larger aperture telescopes to the visible. It is well suited to the observation of space objects and to scientific research in astronomy.
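
The gap between infrared and visible actuator requirements follows from standard fitting-error scaling: the actuator count grows roughly as (D/r0)² and the Fried parameter r0 scales as λ^(6/5). A back-of-the-envelope Python sketch (the 10 cm reference seeing at 0.5 μm is an assumed typical value, not a number from the paper):

```python
def fried_r0(wavelength_m, r0_ref_m=0.10, ref_wavelength_m=0.5e-6):
    # Fried parameter scales as lambda^(6/5); the reference r0 is an
    # assumed 10 cm at 0.5 um (typical good seeing).
    return r0_ref_m * (wavelength_m / ref_wavelength_m) ** (6.0 / 5.0)

def actuator_count(diameter_m, wavelength_m):
    # Order-of-magnitude fitting estimate: one actuator per r0-sized
    # patch across the pupil, so N ~ (D / r0)^2.
    r0 = fried_r0(wavelength_m)
    return (diameter_m / r0) ** 2

D = 10.0  # 10 m telescope, as in the abstract
n_vis = actuator_count(D, 0.7e-6)   # visible
n_ir = actuator_count(D, 1.6e-6)    # near-infrared
```

The several-fold jump from `n_ir` to `n_vis` is the motivation for handing the high-order visible correction to the high-pixel-count LCWFC.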

  4. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  5. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666

  6. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
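
The equal-error-distribution idea can be sketched in one dimension: integrate a per-point error weight along the grid, then place the new points at equal increments of that cumulative weight, so each cell carries the same share of the estimated error. A toy Python sketch (the spike weight is an assumed stand-in for SAGE's flow-gradient-based measure):

```python
import bisect

def redistribute(x, w, n):
    # Place n points over [x[0], x[-1]] so each interval carries an
    # equal share of the cumulative weight (a 1-D sketch of the
    # equal-error-distribution idea; w is a per-point error weight).
    W = [0.0]  # cumulative trapezoidal weight along the initial grid
    for i in range(1, len(x)):
        W.append(W[-1] + 0.5 * (w[i] + w[i - 1]) * (x[i] - x[i - 1]))
    new = []
    for k in range(n):
        target = W[-1] * k / (n - 1)
        j = min(bisect.bisect_left(W, target), len(W) - 1)
        if j == 0:
            new.append(x[0])
        else:
            # linear inversion inside interval j-1 .. j
            frac = (target - W[j - 1]) / (W[j] - W[j - 1])
            new.append(x[j - 1] + frac * (x[j] - x[j - 1]))
    return new

# A weight spike near x = 0.5 (mimicking a high-gradient region)
# draws the redistributed points toward it.
xs = [i / 100 for i in range(101)]
ws = [1.0 + 50.0 * (abs(xi - 0.5) < 0.05) for xi in xs]
new_xs = redistribute(xs, ws, 21)
```

Points cluster tightly inside the spike and spread out elsewhere, which is the 1-D analogue of SAGE's clustering-versus-smoothness balance.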

  7. Adaptive zero-tree structure for curved wavelet image coding

    NASA Astrophysics Data System (ADS)

    Zhang, Liang; Wang, Demin; Vincent, André

    2006-02-01

We investigate the issue of efficient data organization and representation of the curved wavelet coefficients [curved wavelet transform (WT)]. We present an adaptive zero-tree structure that exploits the cross-subband similarity of the curved wavelet transform. Whereas in the embedded zero-tree wavelet (EZW) coder and the set partitioning in hierarchical trees (SPIHT) coder the parent-child relationship is defined in such a way that a parent has four children restricted to a square of 2×2 pixels, the parent-child relationship in the adaptive zero-tree structure varies according to the curves along which the curved WT is performed. Five child patterns were determined based on different combinations of curve orientation. A new image coder was then developed based on this adaptive zero-tree structure and the set-partitioning technique. Experimental results using synthetic and natural images showed the effectiveness of the proposed adaptive zero-tree structure for encoding of the curved wavelet coefficients. The coding gain of the proposed coder can be up to 1.2 dB in terms of peak SNR (PSNR) compared to the SPIHT coder. Subjective evaluation shows that the proposed coder preserves lines and edges better than the SPIHT coder.
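
For contrast with the adaptive tree described above, the fixed parent-child map used by EZW/SPIHT can be written in a few lines (a simplified sketch that ignores the special casing of the coarsest LL band):

```python
def spiht_children(r, c, rows, cols):
    # Standard EZW/SPIHT parent-child map: coefficient (r, c) parents
    # the 2x2 block at (2r, 2c) in the next finer subband -- a fixed
    # square relation, where the adaptive tree instead follows the
    # curves along which the transform was computed.
    kids = [(2 * r, 2 * c), (2 * r, 2 * c + 1),
            (2 * r + 1, 2 * c), (2 * r + 1, 2 * c + 1)]
    return [(i, j) for (i, j) in kids if i < rows and j < cols]
```

For example, `spiht_children(3, 5, 16, 16)` yields the 2×2 block rooted at (6, 10); the paper's contribution is to replace this rigid mapping with five orientation-dependent child patterns.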

  8. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, a MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that the number of search points is reduced by more than 50% compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can save up to 20.86% of ME time while the rate-distortion performance is not compromised. PMID:24672313
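
As a baseline for what UMHexagonS and the proposed scheme approximate, block-matching motion estimation with an exhaustive search can be sketched as follows (the toy frames and block coordinates are assumed for illustration):

```python
def sad(cur, ref, bx, by, dx, dy, bs):
    # Sum of absolute differences between the bs x bs block of `cur`
    # at (bx, by) and the block of `ref` displaced by (dx, dy).
    total = 0
    for y in range(bs):
        for x in range(bs):
            total += abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
    return total

def full_search(cur, ref, bx, by, bs, rng):
    # Exhaustive search over a +/- rng window: the baseline that fast
    # patterns such as UMHexagonS approximate with far fewer points.
    best = None
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            cost = sad(cur, ref, bx, by, dx, dy, bs)
            if best is None or cost < best[0]:
                best = (cost, dx, dy)
    return best

# Toy frames: `cur` is `ref` shifted right by 2 pixels, so the true
# motion vector of any interior block is (-2, 0).
W, H = 16, 16
ref = [[(x * 7 + y * 13) % 31 for x in range(W)] for y in range(H)]
cur = [[ref[y][max(x - 2, 0)] for x in range(W)] for y in range(H)]
cost, mvx, mvy = full_search(cur, ref, 4, 4, 4, 3)
```

A pattern search visits only a small subset of these (2·rng+1)² candidate points, which is where the reported ME time savings come from; the paper's contribution is choosing that subset adaptively from the predicted MV distribution.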

  9. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  10. Long non-coding RNAs in innate and adaptive immunity

    PubMed Central

    Aune, Thomas M.; Spurlock, Charles F.

    2015-01-01

Long noncoding RNAs (lncRNAs) represent a newly discovered class of regulatory molecules that impact a variety of biological processes in cells and organ systems. In humans, it is estimated that there may be more than twice as many lncRNA genes as protein-coding genes. However, only a handful of lncRNAs have been analyzed in detail. In this review, we describe expression and functions of lncRNAs that have been demonstrated to impact innate and adaptive immunity. These emerging paradigms illustrate remarkably diverse mechanisms that lncRNAs utilize to impact the transcriptional programs of immune cells required to fight against pathogens and maintain normal health and homeostasis. PMID:26166759

  11. Simulating ion beam extraction from a single aperture triode acceleration column: A comparison of the beam transport codes IGUN and PBGUNS with test stand data

    SciTech Connect

    Patel, A.; Wills, J. S. C.; Diamond, W. T.

    2008-04-15

    Ion beam extraction from two different ion sources with single aperture triode extraction columns was simulated with the particle beam transport codes PBGUNS and IGUN. For each ion source, the simulation results are compared to experimental data generated on well-equipped test stands. Both codes reproduced the qualitative behavior of the extracted ion beams to incremental and scaled changes to the extraction electrode geometry observed on the test stands. Numerical values of optimum beam currents and beam emittance generated by the simulations also agree well with test stand data.

  12. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth and fifth order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
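
The third-order TVD Runge-Kutta scheme named in the abstract (the Shu-Osher convex-combination form) is compact enough to sketch directly; here it is applied to the scalar test problem u' = -u rather than to the SRHD equations:

```python
import math

def ssp_rk3_step(u, dt, f):
    # Third-order TVD (strong-stability-preserving) Runge-Kutta,
    # written as convex combinations of forward-Euler stages.
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

# Integrate u' = -u from u(0) = 1 to t = 1 and compare with exp(-1).
f = lambda u: -u
u, dt, n = 1.0, 0.01, 100
for _ in range(n):
    u = ssp_rk3_step(u, dt, f)
```

The convex-combination structure is what preserves the TVD property of the underlying spatial discretization, which matters for the shear-flow and shock problems discussed above.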

  13. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation

    PubMed Central

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2013-01-01

Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods. PMID:23750314

  14. Adaptive Distributed Video Coding with Correlation Estimation using Expectation Propagation.

    PubMed

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-15

Distributed video coding (DVC) is rapidly increasing in popularity by shifting complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and the side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good statistical correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC can be classified into two main types: pre-estimation, where estimation starts before decoding, and on-the-fly (OTF) estimation, where the estimate can be refined iteratively during decoding. As potential changes between frames might be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF as it is carried out jointly with decoding of the factor graph-based DVC code. Among different approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance with significantly lower complexity compared with sampling methods.

  15. Modified reconstruction algorithm based on space-time adaptive processing for multichannel synthetic aperture radar systems in azimuth

    NASA Astrophysics Data System (ADS)

    Guo, Xiaojiang; Gao, Yesheng; Wang, Kaizhi; Liu, Xingzhao

    2016-07-01

    A spectrum reconstruction algorithm based on space-time adaptive processing (STAP) can effectively suppress azimuth ambiguity for multichannel synthetic aperture radar (SAR) systems in azimuth. However, the traditional STAP-based reconstruction approach has to estimate the covariance matrix and calculate matrix inversion (MI) for each Doppler frequency bin, which will result in a very large computational load. In addition, the traditional STAP-based approach has to know the exact platform velocity, pulse repetition frequency, and array configuration. Errors involving these parameters will significantly degrade the performance of ambiguity suppression. A modified STAP-based approach to solve these problems is presented. The traditional array steering vectors and corresponding covariance matrices are Doppler-variant in the range-Doppler domain. After preprocessing by a proposed phase compensation method, they would be independent of Doppler bins. Therefore, the modified STAP-based approach needs to estimate the covariance matrix and calculate MI only once. The computation load could be greatly reduced. Moreover, by combining the reconstruction method and a proposed adaptive parameter estimation method, the modified method is able to successfully achieve multichannel SAR signal reconstruction and suppress azimuth ambiguity without knowing the above parameters. Theoretical analysis and experiments showed the simplicity and efficiency of the proposed methods.
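
The covariance-estimate-plus-inversion step at the heart of any STAP weight computation can be illustrated with a two-channel minimum-variance (MVDR) beamformer. This is a generic sketch with assumed array geometry, signal phases and noise levels, not the paper's range-Doppler reconstruction filter:

```python
import cmath
import random

def steer(phase):
    # Two-element steering vector [1, e^{j*phase}].
    return [1.0 + 0j, cmath.exp(1j * phase)]

def sample_covariance(snaps):
    # 2x2 sample covariance R = mean of x x^H over the snapshots.
    r = [[0j, 0j], [0j, 0j]]
    for x in snaps:
        for i in range(2):
            for k in range(2):
                r[i][k] += x[i] * x[k].conjugate()
    n = len(snaps)
    return [[r[i][k] / n for k in range(2)] for i in range(2)]

def inv2(m):
    # Closed-form inverse of a 2x2 matrix.
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def mvdr_weights(R, s):
    # w = R^{-1} s / (s^H R^{-1} s): unit gain toward s, adaptive
    # suppression of whatever dominates the covariance.
    Ri = inv2(R)
    y = [Ri[0][0] * s[0] + Ri[0][1] * s[1],
         Ri[1][0] * s[0] + Ri[1][1] * s[1]]
    denom = s[0].conjugate() * y[0] + s[1].conjugate() * y[1]
    return [y[0] / denom, y[1] / denom]

random.seed(0)
jam = steer(2.0)    # interference direction (assumed phase)
look = steer(0.5)   # desired look direction (assumed phase)
snaps = []
for _ in range(200):
    a = random.gauss(0, 3)  # strong interference amplitude
    n0 = random.gauss(0, 0.1) + 1j * random.gauss(0, 0.1)
    n1 = random.gauss(0, 0.1) + 1j * random.gauss(0, 0.1)
    snaps.append([a * jam[0] + n0, a * jam[1] + n1])

w = mvdr_weights(sample_covariance(snaps), look)
gain_look = abs(w[0].conjugate() * look[0] + w[1].conjugate() * look[1])
gain_jam = abs(w[0].conjugate() * jam[0] + w[1].conjugate() * jam[1])
```

In the traditional approach criticized above, this estimate-and-invert step is repeated per Doppler bin; the paper's phase compensation makes one such computation suffice.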

  16. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    SciTech Connect

    Morris, R; Albanese, K; Lakshmanan, M; Greenberg, J; Kapadia, A

    2015-06-15

Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods don’t readily conform to clinical constraints, primarily because of prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data was collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample’s spatial distribution and respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging for differentiation of normal and neoplastic breast tissues successfully reduces dose and scan times whilst sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded

  17. Was Wright right? The canonical genetic code is an empirical example of an adaptive peak in nature; deviant genetic codes evolved using adaptive bridges.

    PubMed

    Seaborg, David M

    2010-08-01

    The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.

  18. KAPAO: A Natural Guide Star Adaptive Optics System for Small Aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Severson, Scott A.; Choi, P. I.; Spjut, E.; Contreras, D. S.; Gilbreth, B. N.; McGonigle, L. P.; Morrison, W. A.; Rudy, A. R.; Xue, A.; Baranec, C.; Riddle, R.

    2012-05-01

We describe KAPAO, our project to develop and deploy a low-cost, remote-access, natural guide star adaptive optics system for the Pomona College Table Mountain Observatory (TMO) 1-meter telescope. The system will offer simultaneous dual-band, diffraction-limited imaging at visible and near-infrared wavelengths and will deliver an order-of-magnitude improvement in point source sensitivity and angular resolution relative to the current TMO seeing limits. We have adopted off-the-shelf core hardware components to ensure reliability, minimize costs and encourage replication efforts. These components include a MEMS deformable mirror, a Shack-Hartmann wavefront sensor and a piezo-electric tip-tilt mirror. We present: project motivation, goals and milestones; the instrument optical design; the instrument opto-mechanical design and tolerances; and an overview of KAPAO Alpha, our on-the-sky testbed using off-the-shelf optics. Beyond the expanded scientific capabilities enabled by AO-enhanced resolution and sensitivity, the interdisciplinary nature of the instrument development effort provides an exceptional opportunity to train a broad range of undergraduate STEM students in AO technologies and techniques. The breadth of our collaboration, which includes both public (Sonoma State University) and private (Pomona and Harvey Mudd Colleges) undergraduate institutions, has enabled us to engage students from physics, astronomy, engineering and computer science in all stages of this project. This material is based upon work supported by the National Science Foundation under Grant No. 0960343.

  19. KAPAO-Alpha: An On-The-Sky Testbed for Adaptive Optics on Small Aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Morrison, Will; Choi, P. I.; Severson, S. A.; Spjut, E.; Contreras, D. S.; Gilbreth, B. N.; McGonigle, L. P.; Rudy, A. R.; Xue, A.; Baranec, C.; Riddle, R.

    2012-05-01

    We present initial in-lab and on-sky results of a natural guide star adaptive optics instrument, KAPAO-Alpha, being deployed on Pomona College’s 1-meter telescope at Table Mountain Observatory. The instrument is an engineering prototype designed to help us identify and solve design and integration issues before building KAPAO, a low-cost, dual-band, natural guide star AO system currently in active development and scheduled for first light in 2013. The Alpha system operates at visible wavelengths, employs Shack-Hartmann wavefront sensing, and is assembled entirely from commercially available components that include: off-the-shelf optics, a 140-actuator BMC deformable mirror, a high speed SciMeasure Lil’ Joe camera, and an EMCCD for science image acquisition. Wavefront reconstruction operating at 1-kHz speeds is handled with a consumer-grade computer running custom software adopted from the Robo-AO project. The assembly and integration of the Alpha instrument has been undertaken as a Pomona College undergraduate thesis. As part of the larger KAPAO project, it is supported by the National Science Foundation under Grant No. 0960343.

  20. COASP and CHASP Processors for Strip-map and Moving Target Adaptive Processing of EC CV-580 Synthetic Aperture Radar Data: Algorithms and Software Description

    DTIC Science & Technology

    2006-05-01

synthetic aperture radar (SAR) since the late 1990s in support of target detection and classification studies. Until recently, processing of data from...this SAR system has been carried out in-house using the Polarimetric Generalized Airborne SAR Processor (PolGASP) that was developed at the Canada...COASP (Configurable Airborne SAR Processor) and CHASP (Chip Adaptive SAR Processor) processors have been developed to replace and augment PolGASP and

  1. Studies of the chromatic properties and dynamic aperture of the BNL colliding-beam accelerator. [PATRICIA particle tracking code

    SciTech Connect

    Dell, G.F.

    1983-01-01

The PATRICIA particle tracking program has been used to study chromatic effects in the Brookhaven CBA (Colliding Beam Accelerator). The short term behavior of particles in the CBA has been followed for particle histories of 300 turns. Contributions from magnet multipoles characteristic of superconducting magnets and closed orbit errors have been included in determining the dynamic aperture of the CBA for on and off momentum particles. The width of the third integer stopband produced by the temperature dependence of magnetization induced sextupoles in the CBA cable dipoles is evaluated for helium distribution systems having periodicity of one and six. The stopband width at a tune of 68/3 is naturally zero for the system having a periodicity of six and is approximately 10{sup -4} for the system having a periodicity of one. Results from theory are compared with results obtained with PATRICIA; the results agree within a factor of slightly more than two.

  2. Application study of piecewise context-based adaptive binary arithmetic coding combined with modified LZC

    NASA Astrophysics Data System (ADS)

    Su, Yan; Jun, Xie Cheng

    2006-08-01

An algorithm combining LZC and arithmetic coding for image compression is presented, and both theoretical deduction and simulation results prove the correctness and feasibility of the algorithm. According to the characteristics of context-based adaptive binary arithmetic coding and entropy, LZC was modified to cooperate with the optimized piecewise arithmetic coding. This algorithm improves the compression ratio without any additional time consumption compared to the traditional method.
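
The adaptive, context-based probability model that drives such a coder can be sketched independently of the arithmetic-coding arithmetic itself: per-context bit counts give the probability estimate, and -log2(p) summed over the stream is the ideal code length the coder would approach. A Python sketch (the context order and Laplace prior are illustrative choices, not the paper's model):

```python
import math

def adaptive_code_length(bits, context_order=1):
    # Ideal code length (in bits) of an adaptive binary arithmetic
    # coder that keeps Laplace-smoothed counts per context of the
    # previous `context_order` bits; this is only the model side,
    # not the interval-arithmetic side of the coder.
    counts = {}
    total = 0.0
    ctx = (0,) * context_order
    for b in bits:
        c0, c1 = counts.get(ctx, (1, 1))  # Laplace (add-one) prior
        p = (c1 if b else c0) / (c0 + c1)
        total += -math.log2(p)
        counts[ctx] = (c0 + (b == 0), c1 + (b == 1))
        ctx = ctx[1:] + (b,)
    return total

# A repetitive stream: the adaptive model learns the statistics and
# codes it in substantially fewer than len(bits) bits.
bits = [0, 0, 0, 1] * 250
```

On this stream the adaptive model's code length is well under one bit per input bit, which is the effect the combined LZC/arithmetic scheme exploits.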

  3. Combining Measurements with Three-Dimensional Laser Scanning System and Coded Aperture Gamma-Ray Imaging Systems for International Safeguards Applications

    SciTech Connect

    Boehnen, Chris Bensing; Bogard, James S; Hayward, Jason P; Raffo-Caiado, Ana Claudia; Smith, Stephen E; Ziock, Klaus-Peter

    2010-01-01

Being able to verify the operator's declaration with regard to the technical design of nuclear facilities is an important aspect of every safeguards approach. In addition to visual observation, it is relevant to know whether nuclear material is present or has been present in undeclared piping and ducts. The possibility of combining different measurement techniques into one tool should optimize the inspection effort and increase safeguards effectiveness. Oak Ridge National Laboratory (ORNL) is engaged in a technical collaboration project involving two U.S. Department of Energy foreign partners to investigate combining measurements from a three-dimensional (3D) laser scanning system and gamma-ray imaging systems. ORNL conducted simultaneous measurements with a coded-aperture gamma-ray imager and the 3D laser scanner in an operational facility with a complex configuration and different enrichment levels and quantities of uranium. This paper describes these measurements and their results.

  4. Method for reducing background artifacts from images in single-photon emission computed tomography with a uniformly redundant array coded aperture

    NASA Astrophysics Data System (ADS)

    Vassilieva, Olga I.; Chaney, Roy C.

    2002-03-01

    Uniformly redundant array coded apertures have proven to be useful in the design of collimators for x-ray astronomy. They were initially expected to be equally successful in single-photon emission computed tomography (SPECT). Unfortunately, the SPECT images produced by this collimator contain artifacts, which mask the true picture and can lead to false diagnosis. Monte Carlo simulation has shown that the formation of a composite image will significantly reduce these artifacts. A simulation of a tumor in a compressed breast phantom has produced a composite image, which clearly indicates the presence of a 5 mm x 5 mm x 5 mm tumor with a 6:1 intensity ratio relative to the background tissue.
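
The correlation decoding that produces such images can be seen in a one-dimensional toy: a mask built from a cyclic difference set has a delta-function correlation with a matched decoding array, so a point source reconstructs exactly under the idealized far-field assumptions that break down in SPECT. A Python sketch (a tiny (7, 3, 1) set stands in for the full 2-D uniformly redundant array):

```python
def circ_corr(a, b):
    # Periodic cross-correlation c[s] = sum_i a[i] * b[(i+s) mod n].
    n = len(a)
    return [sum(a[i] * b[(i + s) % n] for i in range(n))
            for s in range(n)]

# 1-D mask from the (v=7, k=3, lambda=1) cyclic difference set {0, 1, 3}.
v, k, lam = 7, 3, 1
A = [1 if i in (0, 1, 3) else 0 for i in range(v)]

# Matched decoding array G = A - lam/k gives a delta correlation:
# corr(A, G) = (k - lam) at zero shift and 0 at every other shift.
G = [a - lam / k for a in A]

# Encode a point source of strength 5 at position 2, then decode.
scene = [0, 0, 5, 0, 0, 0, 0]
detector = circ_corr(scene, A)
recon = [x / (k - lam) for x in circ_corr(detector, G)]
```

The background artifacts discussed above arise precisely because near-field SPECT geometry violates the shift-invariant encoding this toy assumes, which is what the composite-image method compensates for.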

  5. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.
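
The flag-and-refine loop at the core of any AMR method can be sketched in one dimension: estimate the error per cell, split cells over a threshold, and repeat. A toy Python version (the endpoint-jump indicator is an assumed stand-in for the structural-mechanics error estimators discussed above):

```python
import math

def refine(cells, f, tol, max_level=8):
    # One flag-and-refine pass: split any cell whose endpoint jump in
    # f exceeds tol (a toy 1-D stand-in for an error estimator).
    out = []
    for (a, b, lev) in cells:
        if lev < max_level and abs(f(b) - f(a)) > tol:
            m = 0.5 * (a + b)
            out.append((a, m, lev + 1))
            out.append((m, b, lev + 1))
        else:
            out.append((a, b, lev))
    return out

# A steep tanh front at x = 0.5 drives refinement toward the front
# while the smooth regions stay coarse.
f = lambda x: math.tanh(50.0 * (x - 0.5))
cells = [(i / 8, (i + 1) / 8, 0) for i in range(8)]
for _ in range(5):
    cells = refine(cells, f, tol=0.2)
widths = [b - a for (a, b, _) in cells]
```

The parallel-unstructured-mesh versions produced by this project add exactly the pieces this sketch omits: robust error estimators, conforming refinement at irregular boundaries, and load balancing across processors.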

  6. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured mesh codes. It is meant to illustrate the minimum requirements for AMR on more than one dimension. For that purpose, it uses the same type of data structure that would be necessary on a two-dimensional AMR code (loosely following the algorithm described by Lohner).

  7. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects, but only when the task directly taps the use of the face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition.

  8. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  9. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
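
    The moving-least-squares building block can be illustrated in one dimension: fit a weighted low-order polynomial through neighboring particle values, then read the field value and derivative off the coefficients. This is a sketch only; Phurbas uses third-order fits in three dimensions:

```python
import numpy as np

def mls_fit(x_nbr, f_nbr, x0, degree=2):
    """Weighted least-squares polynomial fit through neighbor samples
    (1-D moving least squares). Returns the interpolated value and
    first derivative at x0."""
    dx = x_nbr - x0
    A = np.vander(dx, degree + 1, increasing=True)   # basis centered at x0
    w = np.exp(-(dx / np.ptp(dx)) ** 2)              # nearby particles count more
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * A, sw * f_nbr, rcond=None)
    return coef[0], coef[1]                          # f(x0) and f'(x0)

# Neighbor particles sampling a known quadratic field
x = np.array([-0.2, -0.1, 0.05, 0.15, 0.3])
f = x**2 + 3 * x + 1.0
val, deriv = mls_fit(x, f, 0.0)   # exact for a quadratic: 1.0 and 3.0
```

    Because the sample data are exactly quadratic, the degree-2 fit recovers the value and slope at the evaluation point exactly, regardless of the weights.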

  10. Adaptive λ estimation in Lagrangian rate-distortion optimization for video coding

    NASA Astrophysics Data System (ADS)

    Chen, Lulin; Garbacea, Ilie

    2006-01-01

    In this paper, adaptive Lagrangian multiplier λ estimation in Lagrangian R-D optimization for video coding is presented, based on the ρ-domain linear rate model and distortion model. The analysis yields λ as a function of rate, distortion, and coding input statistics: λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k0, where β, δ, and k0 are coding constants and σ² is the variance of the prediction-error input. λ(R, D, σ²) describes the ubiquitous relationship between coding statistics and coding input in hybrid video coding schemes such as H.263, MPEG-2/4, and H.264/AVC. The λ evaluation is decoupled from the quantization parameters. The proposed λ estimation enables fine-grained encoder design and encoder control.
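
    The closed form λ(R, D, σ²) = β(ln(σ²/D) + δ)D/R + k0 is straightforward to evaluate. Note that β, δ and k0 are encoder-specific coding constants; the defaults below are illustrative placeholders, not values from the paper:

```python
import math

def adaptive_lambda(R, D, sigma2, beta=1.0, delta=0.0, k0=0.0):
    """lambda(R, D, sigma^2) = beta*(ln(sigma^2/D) + delta)*D/R + k0.
    beta, delta, k0 are coding constants; the defaults here are
    illustrative, not values from the paper."""
    return beta * (math.log(sigma2 / D) + delta) * D / R + k0

# Example: prediction-error variance e^2 relative to target distortion 1
lam = adaptive_lambda(R=2.0, D=1.0, sigma2=math.e ** 2)   # -> 1.0
```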

  11. Shape-adaptive discrete wavelet transform for coding arbitrarily shaped texture

    NASA Astrophysics Data System (ADS)

    Li, Shipeng; Li, Weiping

    1997-01-01

    This paper presents a shape-adaptive discrete wavelet transform (SA-DWT) scheme for coding arbitrarily shaped texture. The proposed SA-DWT can be used for object-oriented image coding. The number of coefficients after SA-DWT is identical to the number of pels contained in the arbitrarily shaped image objects. The locality property of the wavelet transform and the self-similarity among subbands are well preserved throughout this process. For a rectangular region, the SA-DWT is identical to a standard wavelet transform. With SA-DWT, conventional wavelet-based coding schemes can be readily extended to the coding of arbitrarily shaped objects. The proposed shape-adaptive wavelet transform is not unitary, but the small energy increase is restricted to the boundary of objects in subbands. Two approaches to using the SA-DWT algorithm for object-oriented image and video coding are presented. One combines scalar SA-DWT with the embedded zerotree wavelet (EZW) coding technique; the other extends the normal vector wavelet coding (VWC) technique to arbitrarily shaped objects. Results of applying SA-VWC to real arbitrarily shaped texture coding are given at the end of this paper.
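
    The pel-count-preserving property can be demonstrated on a single arbitrary-length row segment. The Haar lifting below is a simplified stand-in for the paper's SA-DWT filters:

```python
import numpy as np

def sa_haar_segment(seg):
    """One-level Haar lifting on an arbitrary-length segment. The
    output has exactly len(seg) coefficients, mirroring SA-DWT's
    pel-count-preserving property (a simplified stand-in for the
    paper's filters)."""
    seg = np.asarray(seg, dtype=float)
    n = len(seg)
    even = seg[0:n - n % 2:2]
    odd = seg[1::2]
    d = odd - even                    # predict: detail coefficients
    s = even + 0.5 * d                # update: approximation coefficients
    if n % 2:                         # odd length: last pel joins approximation
        s = np.append(s, seg[-1])
    return s, d

# A "row" of an arbitrarily shaped object: 5 pels in, 5 coefficients out
s, d = sa_haar_segment([10, 12, 9, 9, 4])
assert len(s) + len(d) == 5
```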

  12. Unsupervised learning approach to adaptive differential pulse code modulation.

    PubMed

    Griswold, N C; Sayood, K

    1982-04-01

    This research investigates the problem of data compression utilizing an unsupervised estimation algorithm. This extends previous work on a hybrid source coder which combines an orthogonal transformation with differential pulse code modulation (DPCM). The data compression is achieved in the DPCM loop, and it is the quantizer of this scheme which is approached from an unsupervised learning procedure. The distribution defining the quantizer is represented as a set of separable Laplacian mixture densities for two-dimensional images. The condition of identifiability is shown for the Laplacian case, and decision-directed estimates of both the active distribution parameters and the mixing parameters are discussed within a Bayesian structure. The decision-directed estimators, although not optimum, provide a realizable structure for estimating the parameters that define a distribution which has become active. These parameters are then used to scale the optimum (in the mean-square-error sense) Laplacian quantizer. The decision criterion is modified to prevent convergence to a single distribution, which in effect is the default condition for a variance estimator. The method was applied to a test image, and the resulting data demonstrate improvement over other techniques using fixed bit assignments and ideal channel conditions.
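
    The adaptive-quantizer idea can be sketched as a DPCM loop whose quantizer step tracks a running estimate of the prediction-error spread. This is a simplified stand-in for the paper's decision-directed Laplacian-mixture estimator:

```python
import numpy as np

def dpcm_encode(signal, levels=8, a=0.95):
    """DPCM loop with a first-order predictor and a uniform quantizer
    whose step adapts to a running spread estimate of the quantized
    prediction error (a crude stand-in for the decision-directed
    Laplacian parameter estimator)."""
    recon_prev = 0.0
    scale = 1.0                          # running spread estimate
    codes, recon = [], []
    for sample in signal:
        pred = a * recon_prev            # first-order prediction
        e = sample - pred                # prediction error
        step = 2.0 * scale / levels
        q = int(np.clip(np.round(e / step), -levels // 2, levels // 2))
        codes.append(q)
        e_hat = q * step                 # dequantized error
        recon_prev = pred + e_hat
        recon.append(recon_prev)
        # decision-directed update: track spread of the quantized output
        scale = 0.9 * scale + 0.1 * max(abs(e_hat), 1e-3)
    return codes, np.array(recon)

t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 3 * t)
codes, xr = dpcm_encode(x)
```

    The spread update uses the quantized (decoder-visible) error, so encoder and decoder can stay in lockstep without side information.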

  13. Temporal Aperture Modulation

    NASA Technical Reports Server (NTRS)

    Proctor, R. J.

    1981-01-01

    The two types of modulation techniques useful for X-ray imaging are reviewed. The use of optimum coded temporal aperture modulation is shown, in certain cases, to offer an advantage over a spatial aperture modulator. Example applications to a diffuse anisotropic X-ray background experiment and a wide-field-of-view hard X-ray imager are discussed.

  14. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
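
    At the heart of any CABAC-style coder is a per-context probability estimate updated after every coded bit. The exponential update below is a toy model of that mechanism; the standard uses a finite-state table, and the paper replaces it with context-tree weighting:

```python
def track_probability(bits, alpha=0.95, p_init=0.5):
    """Exponentially decaying estimate of P(bit = 1), updated after
    every coded bit -- a simplified model of CABAC's per-context
    adaptation, not the standard's finite-state machine."""
    p = p_init
    for b in bits:
        # move the estimate a fraction (1 - alpha) toward the observed bit
        p = alpha * p + (1.0 - alpha) * (1.0 if b else 0.0)
    return p

# Probability adapts toward recent statistics: mostly ones, then zeros
p = track_probability([1] * 50 + [0] * 10)
```

    A more accurate estimator (the paper's goal) lowers the code length -log2(p) actually spent on each bit, at the cost of extra bookkeeping per context.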

  15. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques make it possible to capture a three-dimensional hyperspectral scene using two-dimensional coded and multiplexed projections. Recovering the original scene from very few projections can be valuable in applications such as remote sensing, video surveillance, and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions; exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis, solving an optimization problem that minimizes a joint l2-l1 norm to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, so only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the l2 norm, penalized by the l1 norm to force the solution to be sparse, and by the nuclear norm to force the solution to be low rank. Theoretical analysis, along with a set of simulations over different data sets, shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in terms of peak signal-to-noise ratio (PSNR).
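
    The two penalties correspond to two standard proximal operators: soft-thresholding for the l1 norm and singular-value thresholding for the nuclear norm. The denoising sketch below alternates them under a plain quadratic data term; it is illustrative only and omits the paper's coded-aperture sensing operator:

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of the l1 norm (promotes sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular-value thresholding: proximal operator of the
    nuclear norm (promotes low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def recover(Y, n_iter=100, lam=0.02, mu=0.5, step=0.5):
    """Proximal-gradient sketch of
    min_X 0.5*||X - Y||_F^2 + lam*||X||_1 + mu*||X||_*
    by alternating the two proximal maps (illustrative; lam, mu,
    step are placeholder values, not the paper's solver)."""
    X = Y.copy()
    for _ in range(n_iter):
        X = X - step * (X - Y)               # gradient step on the data term
        X = soft_threshold(X, step * lam)    # l1 prox: sparsity
        X = svt(X, step * mu)                # nuclear prox: low rank
    return X

rng = np.random.default_rng(0)
L = np.outer(rng.random(20), rng.random(15))   # rank-1 "spectral" structure
Y = L + 0.05 * rng.standard_normal(L.shape)    # noisy observation
X_hat = recover(Y)
```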

  16. Combining Measurements with Three-Dimensional Laser Scanning System and Coded Aperture Gamma-Ray Imaging System for International Safeguards Applications

    SciTech Connect

    Boehnen, Chris Bensing; Bogard, James S; Hayward, Jason P; Raffo-Caiado, Ana Claudia; Smith, Steven E; Ziock, Klaus-Peter

    2010-01-01

    Being able to verify the operator's declaration in regard to the technical design of nuclear facilities is an important aspect of every safeguards approach. In addition to visual observation, it is necessary to know if nuclear material is present or has been present in undeclared piping and ducts. The possibility of combining the results from different measurement techniques into one easily interpreted product should optimize the inspection effort and increase safeguards effectiveness. A collaborative effort to investigate the possibility of combining measurements from a three-dimensional (3D) laser scanning system and gamma-ray imaging systems is under way. The feasibility of the concept has been previously proven with different laboratory prototypes of gamma-ray imaging systems. Recently, simultaneous measurements were conducted with a new highly portable, mechanically cooled, High Purity Germanium (HPGe), coded-aperture gamma-ray imager and a 3D laser scanner in an operational facility with complex configuration and different enrichment levels and quantities of uranium. With specially designed software, data from both instruments were combined and a 3D model of the facility was generated that also identified locations of radioactive sources. This paper provides an overview of the technology, describes the measurements, discusses the various safeguards scenarios addressed, and presents results of experiments.

  17. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  18. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum

    PubMed Central

    Diederen, Kelly M. J.; Ziauddeen, Hisham; Vestergaard, Martin D.; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C.

    2017-01-01

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support of a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. 
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that

  19. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support of a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. 
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that

  20. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate, and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the QoS requirements of both users/applications and the corresponding network. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory, and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency, and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery, and network lifetime compared to its counterparts. PMID:25551485
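
    The simplest building block of coding-based error recovery is a single XOR parity packet, from which any one lost packet can be rebuilt. The sketch below shows that primitive; the paper's mechanism additionally adapts the amount and placement of redundancy to network- and application-level QoS information:

```python
def xor_encode(packets):
    """Bytewise XOR of equal-length packets: a single-parity
    network-coded repair packet."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, byte in enumerate(pkt):
            parity[i] ^= byte
    return bytes(parity)

def xor_recover(packets, parity, lost_index):
    """Rebuild one lost packet from the survivors plus the parity
    (XORing the parity with every surviving packet cancels them out)."""
    survivors = [p for i, p in enumerate(packets) if i != lost_index]
    return xor_encode(survivors + [parity])

# Three 4-byte sensor payloads (hypothetical) plus one parity packet
pkts = [b"ecg1", b"ecg2", b"ecg3"]
parity = xor_encode(pkts)
rebuilt = xor_recover(pkts, parity, lost_index=1)   # recovers b"ecg2"
```

    Real network coding generalizes this to random linear combinations over a finite field, so several losses can be repaired; the XOR case is the one-parity special case.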

  1. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS) in terms of latency, error rate, and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill the QoS requirements of both users/applications and the corresponding network. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory, and hardware cost. However, under dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency, and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery, and network lifetime compared to its counterparts.

  2. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

    Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as an auto-stereoscopic functionality, but compression of the huge input data remains a problem. Efficient 3D data compression is therefore extremely important in the system, and the problems of low temporal consistency and low viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between a current block, to be coded, and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required for signaling the decoder to conduct the same process. To evaluate the coding performance, we implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software), discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit-saving, and it increased further when evaluated on synthesized views of virtual viewpoints.

  3. Lossless image compression based on optimal prediction, adaptive lifting, and conditional arithmetic coding.

    PubMed

    Boulgouris, N V; Tzovaras, D; Strintzis, M G

    2001-01-01

    The optimal predictors of a lifting scheme in the general n-dimensional case are obtained and applied for the lossless compression of still images using first quincunx sampling and then simple row-column sampling. In each case, the efficiency of the linear predictors is enhanced nonlinearly. Directional postprocessing is used in the quincunx case, and adaptive-length postprocessing in the row-column case. Both methods are seen to perform well. The resulting nonlinear interpolation schemes achieve extremely efficient image decorrelation. We further investigate context modeling and adaptive arithmetic coding of wavelet coefficients in a lossless compression framework. Special attention is given to the modeling contexts and the adaptation of the arithmetic coder to the actual data. Experimental evaluation shows that the best of the resulting coders produces better results than other known algorithms for multiresolution-based lossless image coding.
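
    The benefit of context modeling with adaptive arithmetic coding can be gauged by summing the ideal code length -log2(p) of each residual under per-context adaptive counts. The toy estimator below conditions only on the previous residual, a far coarser context than the paper's:

```python
import math
from collections import defaultdict

def context_code_length(residuals, alphabet=256):
    """Ideal adaptive-arithmetic-coding cost (in bits) of a residual
    stream, conditioning each symbol on the previous one and using
    add-one (Laplace) probability updates per context."""
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    bits = 0.0
    prev = 0
    for r in residuals:
        p = (counts[prev][r] + 1) / (totals[prev] + alphabet)
        bits += -math.log2(p)        # ideal code length of this symbol
        counts[prev][r] += 1         # adapt the model to the data seen
        totals[prev] += 1
        prev = r
    return bits

# A strongly patterned residual stream codes far below 8 bits/symbol
bits = context_code_length([0, 0, 1, 0, 0, 1] * 50)
```

    A good decorrelating transform (the lifting predictors above) concentrates residuals into exactly this kind of low-entropy stream, which is what the adaptive coder then exploits.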

  4. Sample-adaptive-prediction for HEVC SCC intra coding with ridge estimation from spatially neighboring samples

    NASA Astrophysics Data System (ADS)

    Kang, Je-Won; Ryu, Soo-Kyung

    2017-02-01

    In this paper, a sample-adaptive prediction technique is proposed to yield efficient coding performance in intra coding for screen content video coding. The sample-based prediction reduces spatial redundancy among neighboring samples. To this end, the proposed technique uses a weighted linear combination of neighboring samples and applies a robust optimization technique, namely ridge estimation, to derive the weights on the decoder side. The ridge estimation uses an L2-norm-based regularization term, and thus the solution is more robust to high-variance samples, such as the sharp edges and high color contrasts exhibited in screen content videos. The experimental results demonstrate that the proposed technique provides an improved coding gain compared with the HEVC screen content coding reference software.
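
    The decoder-side weight derivation is ordinary ridge regression on already-decoded neighboring samples. The sketch below uses synthetic data, and the regularization strength lam is an illustrative placeholder:

```python
import numpy as np

def ridge_weights(N, y, lam=1.0):
    """Ridge estimate w = (N^T N + lam*I)^{-1} N^T y of the weights
    that linearly combine neighboring samples into a prediction.
    The L2 penalty lam keeps the solution stable for high-variance
    neighborhoods (sharp edges, high color contrast)."""
    k = N.shape[1]
    return np.linalg.solve(N.T @ N + lam * np.eye(k), N.T @ y)

# Rows: previously decoded samples; columns: their causal neighbors
rng = np.random.default_rng(1)
N = rng.standard_normal((50, 3))
true_w = np.array([0.6, 0.3, 0.1])        # hypothetical "true" weights
y = N @ true_w + 0.01 * rng.standard_normal(50)
w = ridge_weights(N, y, lam=0.1)          # lam: illustrative strength
pred = N @ w                              # sample-adaptive prediction
```

    Because both encoder and decoder can run the same regression on decoded samples, no weights need to be transmitted.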

  5. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  6. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaptation process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; adaptation to this solution will not result in any improvement, and only grid refinement can produce an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  7. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  8. Synthetic-aperture chirp confocal imaging.

    PubMed

    Chien, Wei-Chen; Dilworth, D S; Liu, Elson; Leith, E N

    2006-01-20

    An imaging system that combines synthetic-aperture imaging, holography, and an optical chirp with confocal imaging is described and analyzed. Comparisons are made with synthetic-aperture radar systems. Adaptation of several synthetic-aperture radar techniques to the optical counterparts is suggested.

  9. SPECT Imaging of 2-D and 3-D Distributed Sources with Near-Field Coded Aperture Collimation: Computer Simulation and Real Data Validation.

    PubMed

    Mu, Zhiping; Dobrucki, Lawrence W; Liu, Yi-Hwa

    The imaging of distributed sources with near-field coded aperture (CA) remains extremely challenging and is broadly considered unsuitable for single-photon emission computerized tomography (SPECT). This study proposes a novel CA SPECT reconstruction approach and evaluates the feasibility of imaging and reconstructing distributed hot sources and cold lesions using near-field CA collimation and iterative image reconstruction. Computer simulations were designed to compare CA and pinhole collimations in two-dimensional radionuclide imaging. Digital phantoms were created and CA images of the phantoms were reconstructed using maximum likelihood expectation maximization (MLEM). Errors and the contrast-to-noise ratio (CNR) were calculated and image resolution was evaluated. An ex vivo rat heart with myocardial infarction was imaged using a micro-SPECT system equipped with a custom-made CA module and a commercial 5-pinhole collimator. Rat CA images were reconstructed via the three-dimensional (3-D) MLEM algorithm developed for CA SPECT with and without correction for a large projection angle, and 5-pinhole images were reconstructed using the commercial software provided by the SPECT system. Phantom images of CA were markedly improved in terms of image quality, quantitative root-mean-squared error, and CNR, as compared to pinhole images. CA and pinhole images yielded similar image resolution, while CA collimation resulted in fewer noise artifacts. CA and pinhole images of the rat heart were well reconstructed and the myocardial perfusion defects could be clearly discerned from 3-D CA and 5-pinhole SPECT images, whereas 5-pinhole SPECT images suffered from severe noise artifacts. Image contrast of CA SPECT was further improved after correction for the large projection angle used in the rat heart imaging. The computer simulations and small-animal imaging study presented herein indicate that the proposed 3-D CA SPECT imaging and reconstruction approaches worked reasonably
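
    The MLEM algorithm used for both the phantom and rat-heart reconstructions is the standard multiplicative update; the tiny 3×2 system matrix below is a hypothetical stand-in for the coded-aperture system model, which in the paper would encode the mask pattern:

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM reconstruction: a multiplicative update that preserves positivity.
    A maps source pixels to detector bins; for CA SPECT it would encode the
    coded-aperture mask (here a tiny hypothetical 3x2 example)."""
    x = np.ones(A.shape[1])              # flat, strictly positive start
    sens = A.sum(axis=0)                 # sensitivity: back-projection of ones
    for _ in range(n_iter):
        proj = A @ x                                   # forward projection
        ratio = y / np.maximum(proj, 1e-12)            # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
    return x

A = np.array([[0.8, 0.2],
              [0.5, 0.5],
              [0.1, 0.9]])
x_true = np.array([2.0, 1.0])
y = A @ x_true                           # noiseless projections
x_hat = mlem(A, y)                       # converges toward x_true
```

With noiseless, consistent data the iterates approach the true source; with Poisson noise one would stop early or regularize, as noise amplification grows with iteration count.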

  10. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  11. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying the traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance. To provide the target BER regardless of the data destination, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matched to the OSNR range into which the current channel OSNR falls. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC-) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them will be described in this invited paper. Instead of conventional QAM based modulation schemes, we employ the signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of 4D signaling over polarization-division multiplexed (PDM) QAM, using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to solve the problems related to the limited bandwidth of information infrastructure, high energy consumption, and heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme, which in addition to amplitude, phase, and polarization state employs the spatial modes as additional basis functions for multidimensional coded-modulation.
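
    The joint selection of constellation size and code rate can be sketched as below; the AWGN capacity formula, the margin, and the candidate mode list are illustrative assumptions standing in for the paper's OSCD constellations and QC-LDPC rates:

```python
import math

def pick_mode(osnr_db, modes, margin_db=1.0):
    """Choose the (bits/symbol, code rate) pair whose net spectral efficiency
    b*r is largest while staying below the estimated channel capacity.
    Capacity model and margin are illustrative assumptions."""
    snr = 10.0 ** ((osnr_db - margin_db) / 10.0)
    capacity = math.log2(1.0 + snr)               # bits/symbol
    feasible = [(b, r) for b, r in modes if b * r <= capacity]
    if not feasible:                              # channel too poor: most robust mode
        return min(modes, key=lambda m: m[0] * m[1])
    return max(feasible, key=lambda m: m[0] * m[1])

# candidate modes: (bits per symbol, LDPC code rate), fixed codeword length
modes = [(2, 0.8), (4, 0.8), (4, 0.9), (6, 0.8), (6, 0.9)]
mode = pick_mode(osnr_db=13.0, modes=modes)
```

Because the codeword length stays fixed across modes, switching the rate this way does not disturb frame synchronization, matching the constraint stated in the abstract.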

  12. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  13. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Astrophysics Data System (ADS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-11-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  14. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons, which move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  15. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

    An experiment is reported in which a Navier-Stokes code, originally developed on a serial computer, was adapted to concurrent processing on the ICL Distributed Array Processor (DAP). The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications made to the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  16. Effective wavelet-based compression method with adaptive quantization threshold and zerotree coding

    NASA Astrophysics Data System (ADS)

    Przelaskowski, Artur; Kazubek, Marian; Jamrogiewicz, Tomasz

    1997-10-01

    An efficient image compression technique, intended especially for medical applications, is presented. Dyadic wavelet decomposition using the Antonini and Villasenor filter banks is followed by adaptive space-frequency quantization and zerotree-based entropy coding of the wavelet coefficients. Threshold selection and uniform quantization are based on a spatial variance estimate built on the lowest-frequency subband data set. The threshold value for each coefficient is evaluated as a linear function of a 9th-order binary context. After quantization, zerotree construction, pruning, and arithmetic coding are applied for efficient lossless data coding. The presented compression method is less complex than the most effective EZW-based techniques yet achieves comparable compression efficiency. Specifically, its efficiency is similar to SPIHT for MR image compression, slightly better for CT images, and significantly better for US images. Thus the compression efficiency of the presented method is competitive with the best published algorithms in the literature across diverse classes of medical images.

  17. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
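
    A standard greedy bit-allocation scheme of the kind analyzed in such work can be sketched as follows; the high-rate distortion model D_i = var_i · 2^(-2b_i) is the usual textbook assumption, not necessarily the dissertation's exact algorithm:

```python
import heapq

def allocate_bits(variances, total_bits):
    """Greedy bit allocation under the high-rate model D_i = var_i * 2**(-2*b_i):
    repeatedly give the next bit to the transform coefficient with the largest
    marginal distortion reduction. Illustrative sketch only."""
    bits = [0] * len(variances)
    # adding a bit at b bits cuts D_i by var_i*(4**(-b) - 4**(-b-1)) = 0.75*var_i*4**(-b)
    heap = [(-0.75 * v, i) for i, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        gain, i = heapq.heappop(heap)           # most valuable next bit
        bits[i] += 1
        heapq.heappush(heap, (gain / 4.0, i))   # next bit gains 1/4 as much
    return bits

# higher-variance coefficients (harder regions) receive more bits
bits = allocate_bits([16.0, 4.0, 1.0, 1.0], total_bits=6)
```

This mirrors the thesis's theme: coder complexity (here, bit budget per coefficient) follows local coding difficulty.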

  18. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding scheme with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall amount of transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control, as in most cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly, since the selection of the RoI, its size, the overall code rate, and a number of test features such as noise level can be set by the users at both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both the binary symmetric channel (BSC) and the Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770
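
    The adaptation rule — more parity for subblocks near the RoI and for noisier channels — can be sketched as below; the functional form and all constants are illustrative assumptions, not the paper's scheme:

```python
def parity_length(dist_to_roi, est_ber, base=4, max_parity=32):
    """Hypothetical adaptation rule: parity length grows both with the
    subblock's proximity to the RoI and with the estimated channel noise
    (BER fed back from the receiver). Constants are illustrative."""
    proximity = 1.0 / (1.0 + dist_to_roi)     # 1 at the RoI, -> 0 far away
    noise = min(est_ber / 0.01, 1.0)          # saturate at BER = 1e-2
    return min(base + round((max_parity - base) * proximity * noise), max_parity)

p_near = parity_length(dist_to_roi=0, est_ber=0.01)    # RoI block, noisy channel
p_far = parity_length(dist_to_roi=10, est_ber=0.001)   # distant block, cleaner channel
```

The RoI block on a noisy channel gets maximum protection, while peripheral blocks on a clean channel carry almost no parity overhead.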

  19. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.

  20. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  1. An optimized context-based adaptive binary arithmetic coding algorithm in a progressive H.264 encoder

    NASA Astrophysics Data System (ADS)

    Xiao, Guang; Shi, Xu-li; An, Ping; Zhang, Zhao-yang; Gao, Ge; Teng, Guo-wei

    2006-05-01

    Context-based Adaptive Binary Arithmetic Coding (CABAC) is a new entropy coding method presented in H.264/AVC that is highly efficient in video coding. In this method, the probability of the current symbol is estimated using a carefully designed context model, which is adaptive and can approach the statistical characteristics of the source. An arithmetic coding mechanism then largely removes inter-symbol redundancy. Compared with the UVLC method of the prior standard, CABAC is more complex but reduces the bit rate more efficiently. Based on a thorough analysis of the CABAC encoding and decoding methods, this paper proposes two techniques, a sub-table method and a stream-reuse method, to improve the encoding efficiency as implemented in the H.264 JM code. In JM, the CABAC function produces the bits of every syntax element one by one, and repeated multiplications inside this function make it inefficient. The proposed sub-table algorithm creates tables beforehand and then produces all bits of a syntax element at once. In JM, the intra- and inter-prediction mode selection algorithm, with its different criteria, is based on a rate-distortion optimization (RDO) model; one parameter of the RDO model is the bit rate produced by the CABAC operator. After intra- or inter-prediction mode selection, the CABAC stream is discarded and recalculated for the output stream. The proposed stream-reuse algorithm stores the stream created during mode selection in memory and reuses it in the encoding function. Experimental results show that the proposed algorithms achieve average speed-ups of 17 to 78 MSEL for QCIF and CIF sequences, respectively, compared with the original JM algorithm, at the cost of only a little memory. The CABAC was realized in our progressive H.264 encoder.

  2. Adaptive three-dimensional motion-compensated wavelet transform for image sequence coding

    NASA Astrophysics Data System (ADS)

    Leduc, Jean-Pierre

    1994-09-01

    This paper describes a 3D spatio-temporal coding algorithm for the bit-rate compression of digital-image sequences. The coding scheme is based on several specific features, namely a motion representation with a four-parameter affine model, a motion-adapted temporal wavelet decomposition along the motion trajectories, and a signal-adapted spatial wavelet transform. The motion estimation is performed on the basis of four-parameter affine transformation models, also called similitudes. This transformation takes into account translations, rotations, and scalings. The temporal wavelet filter bank exploits bi-orthogonal linear-phase dyadic decompositions. The 2D spatial decomposition is based on dyadic signal-adaptive filter banks with either para-unitary or bi-orthogonal bases. The adaptive filtering is carried out according to a performance criterion to be optimized under constraints, in order to maximize the compression ratio at the expense of graceful degradation of the subjective image quality. The major principles of the present technique are, in the analysis process, to extract and separate the motion contained in the sequences from the spatio-temporal redundancy and, in the compression process, to take into account the rate-distortion function on the basis of spatio-temporal psycho-visual properties so as to achieve the most graceful degradation. To complete the description of the coding scheme, the compression procedure is composed of scalar quantizers, which exploit the 3D spatio-temporal psycho-visual properties of the Human Visual System, and entropy coders, which finalize the bit-rate compression.

  3. Effects of selective adaptation on coding sugar and salt tastes in mixtures.

    PubMed

    Frank, Marion E; Goyert, Holly F; Formaker, Bradley K; Hettinger, Thomas P

    2012-10-01

    Little is known about coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified taste components of "salt + sugar" mixtures. In 4 sessions, 16 adapt-test stimulus pairs were presented as atomized, 150-μL "taste puffs" to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, "NaCl + sucrose," and water. In unadapted mixtures of two NaCl concentrations, 0.1 or 0.05 M, with sucrose at three times those concentrations, 0.3 or 0.15 M, the sugar was identified 98% of the time but the suppressed salt only 65% of the time. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of sugar and salt extra mixture components was as certain as identification of single compounds. The results revealed that salt-sugar mixture suppression, dependent on relative mixture-component concentration, was mutual. Furthermore, as with odors, stronger and more recent tastes are emphasized in dynamic experimental conditions replicating natural situations.

  4. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  5. Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki

    One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.

  6. Contour coding based rotating adaptive model for human detection and tracking in thermal catadioptric omnidirectional vision.

    PubMed

    Tang, Yazhe; Li, Youfu

    2012-09-20

    In this paper, we introduce a novel surveillance system based on thermal catadioptric omnidirectional (TCO) vision. Conventional contour-based methods are difficult to apply to the TCO sensor for detection or tracking purposes due to the distortion of TCO vision. To solve this problem, we propose a contour coding based rotating adaptive model (RAM) that can extract the contour feature from TCO vision directly, as it takes advantage of the relative angle based on the characteristics of TCO vision to change the sampling sequence automatically. A series of experiments and quantitative analyses verify that the performance of the proposed RAM-based contour coding feature for human detection and tracking is satisfactory in TCO vision.

  7. Long-range accelerated BOTDA sensor using adaptive linear prediction and cyclic coding.

    PubMed

    Muanenda, Yonas; Taki, Mohammad; Pasquale, Fabrizio Di

    2014-09-15

    We propose and experimentally demonstrate a long-range accelerated Brillouin optical time domain analysis (BOTDA) sensor that exploits the complementary noise reduction benefits of adaptive linear prediction and optical pulse coding. The combined technique reduces the number of averages of the backscattered BOTDA traces by orders of magnitude compared to a standard single-pulse BOTDA, enabling distributed strain measurement over 10 km of a standard single-mode fiber with meter-scale spatial resolution and 1.8 MHz Brillouin frequency shift resolution. By optimizing the system parameters, the measurement is achieved with only 20 averages for each scanned frequency of the Brillouin gain spectrum, allowing a strain measurement eight times faster than with cyclic pulse coding alone.
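
    Adaptive linear prediction for trace denoising can be sketched with a normalized LMS one-step predictor: the predictor tracks the correlated signal, while uncorrelated noise is unpredictable and is therefore suppressed. Filter order, step size, and the sinusoidal test signal below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def alp_denoise(trace, order=8, mu=0.1):
    """Adaptive linear prediction via normalized LMS: predict each sample
    from the previous `order` samples and output the prediction, which
    retains the correlated signal and rejects uncorrelated noise."""
    w = np.zeros(order)
    out = trace.copy()
    for n in range(order, len(trace)):
        x = trace[n - order:n][::-1]            # most recent sample first
        pred = w @ x                            # one-step-ahead prediction
        err = trace[n] - pred
        w += mu * err * x / (x @ x + 1e-12)     # normalized LMS update
        out[n] = pred
    return out

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)
clean = np.sin(2 * np.pi * 5 * t)               # slowly varying "trace"
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = alp_denoise(noisy)                   # closer to `clean` once adapted
```

After the filter converges, the prediction output has a markedly lower mean squared error against the clean signal than the raw noisy trace, which is what lets the sensor get away with far fewer trace averages.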

  8. Global error estimation based on the tolerance proportionality for some adaptive Runge-Kutta codes

    NASA Astrophysics Data System (ADS)

    Calvo, M.; González-Pinto, S.; Montijano, J. I.

    2008-09-01

    Modern codes for the numerical solution of Initial Value Problems (IVPs) in ODEs are based on adaptive methods that, for a user-supplied tolerance δ, attempt to advance the integration selecting the size of each step so that some measure of the local error is ≃ δ. Although this policy does not ensure that the global errors are under the prescribed tolerance, after the early studies of Stetter [Considerations concerning a theory for ODE-solvers, in: R. Burlisch, R.D. Grigorieff, J. Schröder (Eds.), Numerical Treatment of Differential Equations, Proceedings of Oberwolfach, 1976, Lecture Notes in Mathematics, vol. 631, Springer, Berlin, 1978, pp. 188-200; Tolerance proportionality in ODE codes, in: R. März (Ed.), Proceedings of the Second Conference on Numerical Treatment of Ordinary Differential Equations, Humbold University, Berlin, 1980, pp. 109-123] and the extensions of Higham [Global error versus tolerance for explicit Runge-Kutta methods, IMA J. Numer. Anal. 11 (1991) 457-480; The tolerance proportionality of adaptive ODE solvers, J. Comput. Appl. Math. 45 (1993) 227-236; The reliability of standard local error control algorithms for initial value ordinary differential equations, in: Proceedings: The Quality of Numerical Software: Assessment and Enhancement, IFIP Series, Springer, Berlin, 1997], it has been proved that in many existing explicit Runge-Kutta codes the global errors behave asymptotically as some rational power of δ. This step-size policy, for a given IVP, determines at each grid point t_n a new step-size h_{n+1} = h(t_n; δ), so that h(t; δ) is a continuous function of t. In this paper, a study of the tolerance proportionality property is carried out under a discontinuous step-size policy that does not allow the size of the step to change if the step-size ratio between two consecutive steps is close to unity. This theory is applied to obtain global error estimations in a few problems that have been solved with
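
    The step-size policy under discussion is the standard local-error controller; a minimal embedded Euler/Heun sketch shows how the global error tracks the tolerance δ. The pair, safety factor, and test problem are illustrative, not taken from the paper:

```python
import math

def rk2_adaptive(f, t0, y0, t_end, tol):
    """Embedded Euler/Heun pair with the standard continuous controller
    h_new = 0.9 * h * (tol/err)**(1/2); a sketch of the adaptive policy
    whose tolerance proportionality the paper analyzes."""
    t, y, h = t0, y0, 0.1
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        err = abs(h * (k2 - k1) / 2.0)          # local error of the Euler step
        if err <= tol:                          # accept: advance with Heun
            y += h * (k1 + k2) / 2.0
            t += h
        h *= 0.9 * (tol / max(err, 1e-16)) ** 0.5
    return y

f = lambda t, y: -y                             # y' = -y, exact y(1) = e**-1
errs = [abs(rk2_adaptive(f, 0.0, 1.0, 1.0, tol) - math.exp(-1.0))
        for tol in (1e-4, 1e-6)]                # global error shrinks with tol
```

Tightening δ by two orders of magnitude shrinks the observed global error, and the tolerance-proportionality results cited above characterize the asymptotic power of δ governing that decrease.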

  9. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L. E-mail: jmaron@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10⁻⁶ amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  10. Flexible Coding of Task Rules in Frontoparietal Cortex: An Adaptive System for Flexible Cognitive Control.

    PubMed

    Woolgar, Alexandra; Afshar, Soheil; Williams, Mark A; Rich, Anina N

    2015-10-01

    How do our brains achieve the cognitive control that is required for flexible behavior? Several models of cognitive control propose a role for frontoparietal cortex in the structure and representation of task sets or rules. For behavior to be flexible, however, the system must also rapidly reorganize as mental focus changes. Here we used multivoxel pattern analysis of fMRI data to demonstrate adaptive reorganization of frontoparietal activity patterns following a change in the complexity of the task rules. When task rules were relatively simple, frontoparietal cortex did not hold detectable information about these rules. In contrast, when the rules were more complex, frontoparietal cortex showed clear and decodable rule discrimination. Our data demonstrate that frontoparietal activity adjusts to task complexity, with better discrimination of rules that are behaviorally more confusable. The change in coding was specific to the rule element of the task and was not mirrored in more specialized cortex (early visual cortex) where coding was independent of difficulty. In line with an adaptive view of frontoparietal function, the data suggest a system that rapidly reconfigures in accordance with the difficulty of a behavioral task. This system may provide a neural basis for the flexible control of human behavior.

  11. Complexity modeling for context-based adaptive binary arithmetic coding (CABAC) in H.264/AVC decoder

    NASA Astrophysics Data System (ADS)

    Lee, Szu-Wei; Kuo, C.-C. Jay

    2007-09-01

    One way to save power in the H.264 decoder is for the H.264 encoder to generate decoder-friendly bit streams. Following this idea, a decoding complexity model of context-based adaptive binary arithmetic coding (CABAC) for H.264/AVC is investigated in this research. Since different coding modes affect the number of quantized transformed coefficients (QTCs) and motion vectors (MVs) and, consequently, the complexity of entropy decoding, an encoder equipped with a complexity model can estimate the complexity of entropy decoding and choose the coding mode that yields the best trade-off between rate, distortion, and decoding complexity. The complexity model consists of two parts: one for source data (i.e., QTCs) and the other for header data (i.e., the macroblock (MB) type and MVs). Thus, the proposed CABAC decoding complexity model of an MB is a function of the QTCs and associated MVs, which is verified experimentally. The proposed model provides good estimates for a variety of bit streams. Practical applications of this complexity model are also discussed.
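
    A hedged sketch of how an encoder might use such a model in mode decision follows; the cycle weights, cost weighting, and candidate modes are hypothetical placeholders, not values from the paper:

```python
# Illustrative linear decoding-complexity model for one macroblock (MB):
# complexity = per-QTC source term + per-MV header term + fixed MB-type cost.
# All weights below are assumed for illustration.

C_QTC = 12.0   # cycles per decoded quantized transform coefficient (assumed)
C_MV = 30.0    # cycles per decoded motion vector (assumed)
C_TYPE = {"intra": 50.0, "inter": 80.0}  # fixed cost per MB type (assumed)

def mb_decoding_complexity(n_qtc, n_mv, mb_type):
    """Estimate CABAC entropy-decoding cycles for one macroblock."""
    return C_TYPE[mb_type] + C_QTC * n_qtc + C_MV * n_mv

def best_mode(candidates, rate_weight=1.0, complexity_weight=0.01):
    """Pick the mode minimizing a rate + complexity cost (distortion omitted)."""
    def cost(c):
        return rate_weight * c["bits"] + complexity_weight * mb_decoding_complexity(
            c["n_qtc"], c["n_mv"], c["mb_type"])
    return min(candidates, key=cost)

modes = [
    {"name": "intra16", "bits": 420, "n_qtc": 90, "n_mv": 0, "mb_type": "intra"},
    {"name": "inter8x8", "bits": 300, "n_qtc": 40, "n_mv": 4, "mb_type": "inter"},
]
print(best_mode(modes)["name"])
```

    A real encoder would include the distortion term as well; the point here is only that the complexity estimate enters the mode-decision cost alongside rate.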

  12. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension to the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial PSNR reduction of over 3 dB, with error propagation in over 130 pictures following the one in which the loss occurred. This work is among the earliest studies in this area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality, and it offers empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.

  13. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which yields an additional 0.5 dB gain compared to conventional LDPC-coded modulation at the same code rate.
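
    The arithmetic behind rate adaptation by shortening can be sketched as follows, under the usual convention that shortening s information bits of an (n, k) mother code yields an (n − s, k − s) code; the mother-code parameters are illustrative, chosen only so the resulting overheads land in the 25%-42.9% range quoted above:

```python
# Shortening keeps the number of parity bits (n - k) fixed while reducing
# the number of information bits, so the overhead (parity/info) rises and
# the code rate falls. Parameters are assumptions for illustration.

def shortened_params(n, k, s):
    """Return (code rate, FEC overhead) after shortening s info bits."""
    k_s, n_s = k - s, n - s
    rate = k_s / n_s
    overhead = (n_s - k_s) / k_s   # parity bits relative to info bits
    return rate, overhead

n, k = 40000, 32000              # hypothetical mother code: rate 0.8, 25% overhead
rate0, oh0 = shortened_params(n, k, 0)
rate1, oh1 = shortened_params(n, k, 13350)   # heavier shortening
print(f"no shortening: rate={rate0:.3f}, overhead={oh0:.1%}")
print(f"s=13350:       rate={rate1:.3f}, overhead={oh1:.1%}")
```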

  14. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion-optimal manner using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results also show that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
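
    The per-region predictor selection reduces to a plain Lagrangian cost minimization; the λ value, the candidate predictor names, and their distortion/rate numbers below are made-up illustrative values:

```python
# For each region, pick the wavelet predictor minimizing J = D + lambda * R.

LAMBDA = 0.1   # Lagrange multiplier (illustrative)

def select_predictor(region_stats):
    """region_stats: list of (name, distortion, rate_bits) per candidate."""
    return min(region_stats, key=lambda c: c[1] + LAMBDA * c[2])[0]

region = [
    ("butterfly", 4.0, 120.0),   # the classical butterfly subdivision predictor
    ("average",   3.2, 140.0),   # hypothetical alternative predictors
    ("midpoint",  5.1,  90.0),
]
print(select_predictor(region))
```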

  15. Hyperspectral image compression: adapting SPIHT and EZW to anisotropic 3-D wavelet coding.

    PubMed

    Christophe, Emmanuel; Mailhes, Corinne; Duhamel, Pierre

    2008-12-01

    Hyperspectral images present some specific characteristics that should be exploited by an efficient compression system. In compression, wavelets have shown good adaptability to a wide range of data while being of reasonable complexity. Some wavelet-based compression algorithms have been successfully used for hyperspectral space missions. This paper focuses on the optimization of a full wavelet compression system for hyperspectral images. Each step of the compression algorithm is studied and optimized. First, an algorithm to find the optimal 3-D wavelet decomposition in a rate-distortion sense is defined. Then, it is shown that a specific fixed decomposition has almost the same performance while being more tractable in terms of complexity. This decomposition significantly improves on the classical isotropic decomposition. One of the most useful properties of this fixed decomposition is that it allows the use of zerotree algorithms. Various tree structures, creating a relationship between coefficients, are compared. Two efficient compression methods based on zerotree coding (EZW and SPIHT) are adapted to this near-optimal decomposition with the best tree structure found. Performances are compared with an adaptation of JPEG 2000 for hyperspectral images on six different areas presenting different statistical properties.

  16. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program implementing the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared with two other scenarios, answering all items (AAI) and the randomized selection method (RSM), with respect to item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
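
    The item-selection core of a CAT can be sketched as follows; this simplification uses a dichotomous Rasch model rather than the partial credit model of the study, and the item bank is invented:

```python
import math

# Next item = the unused item with maximum Fisher information at the
# current ability estimate theta. Under the Rasch model the information
# of an item with difficulty b is p*(1-p), maximal when theta == b.

def p_correct(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta, b):
    p = p_correct(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank, used):
    unused = [i for i in range(len(bank)) if i not in used]
    return max(unused, key=lambda i: item_information(theta, bank[i]))

bank = [-2.0, -1.0, 0.0, 1.5, 3.0]   # item difficulties (illustrative)
print(next_item(0.0, bank, used=set()))   # the item nearest theta wins
```

    Administering only the most informative items at each step is what lets the CAT shorten the survey without losing measurement precision.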

  17. Coding and decoding with adapting neurons: a population approach to the peri-stimulus time histogram.

    PubMed

    Naud, Richard; Gerstner, Wulfram

    2012-01-01

    The response of a neuron to a time-dependent stimulus, as measured in a Peri-Stimulus-Time-Histogram (PSTH), exhibits an intricate temporal structure that reflects potential temporal coding principles. Here we analyze the encoding and decoding of PSTHs for spiking neurons with arbitrary refractoriness and adaptation. As a modeling framework, we use the spike response model, also known as the generalized linear neuron model. Because of refractoriness, the effect of the most recent spike on the spiking probability a few milliseconds later is very strong. The influence of the last spike therefore needs to be described with high precision, while the rest of the neuronal spiking history merely introduces an average self-inhibition or adaptation that depends on the expected number of past spikes but not on the exact spike timings. Based on these insights, we derive a 'quasi-renewal equation' which is shown to yield an excellent description of the firing rate of adapting neurons. We explore the domain of validity of the quasi-renewal equation and compare it with other rate equations for populations of spiking neurons. The problem of decoding the stimulus from the population response (or PSTH) is addressed analogously. We find that for small levels of activity and weak adaptation, a simple accumulator of the past activity is sufficient to decode the original input, but when refractory effects become large decoding becomes a non-linear function of the past activity. The results presented here can be applied to the mean-field analysis of coupled neuron networks, but also to arbitrary point processes with negative self-interaction.
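
    The population-level description referred to above has the standard renewal-integral structure; the sketch below shows only that structure (the exact quasi-renewal interval density is derived in the paper):

```latex
A(t) \;=\; \int_{-\infty}^{t} P_{\mathrm{QR}}\!\left(t \mid \hat{t}\right)\, A(\hat{t})\,\mathrm{d}\hat{t}
```

    Here A(t) is the population activity (the PSTH), \hat{t} is the time of the last spike, and P_QR is an interval density in which the last spike enters exactly while earlier spikes contribute only through their expected adaptation effect, mirroring the insight stated in the abstract.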

  18. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network unit (ONU). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method is a promising candidate for future optical metro-access networks.
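
    The idea of matching spreading gain to the link budget can be sketched as follows; the loss model, power budget, and spreading factors are all illustrative assumptions, not parameters from the experiment:

```python
import math

# Spreading by a factor SF gives roughly 10*log10(SF) dB of processing
# gain, so an ONU behind a larger split ratio or longer fiber is assigned
# a larger SF. All constants are assumed for illustration.

def required_gain_db(split_ratio, length_km, alpha_db_per_km=0.2, margin_db=2.0):
    split_loss = 10.0 * math.log10(split_ratio)          # ideal splitter loss
    return split_loss + alpha_db_per_km * length_km + margin_db - 20.0  # 20 dB budget

def choose_spreading_factor(split_ratio, length_km, factors=(1, 2, 4, 8, 16)):
    need = required_gain_db(split_ratio, length_km)
    for sf in factors:                      # smallest SF that covers the shortfall
        if 10.0 * math.log10(sf) >= need:
            return sf
    return factors[-1]

print(choose_spreading_factor(split_ratio=32, length_km=20))
```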

  19. Image sensor system with bio-inspired efficient coding and adaptation.

    PubMed

    Okuno, Hirotsugu; Yagi, Tetsuya

    2012-08-01

    We designed and implemented an image sensor system equipped with three bio-inspired coding and adaptation strategies: logarithmic transform, local average subtraction, and feedback gain control. The system comprises a field-programmable gate array (FPGA), a resistive network, and active pixel sensors (APS), whose light intensity-voltage characteristics are controllable. The system employs multiple time-varying reset voltage signals for APS in order to realize multiple logarithmic intensity-voltage characteristics, which are controlled so that the entropy of the output image is maximized. The system also employs local average subtraction and gain control in order to obtain images with an appropriate contrast. The local average is calculated by the resistive network instantaneously. The designed system was successfully used to obtain appropriate images of objects that were subjected to large changes in illumination.
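
    The three stages can be sketched on a 1-D row of pixel intensities; the local-average step stands in crudely for the resistive network, and all constants are illustrative:

```python
import math

# Logarithmic compression, local-average subtraction, and a global
# feedback gain, applied in sequence to a row of intensities.

def log_transform(row):
    return [math.log1p(v) for v in row]

def subtract_local_average(row):
    out = []
    for i, v in enumerate(row):
        lo, hi = max(0, i - 1), min(len(row), i + 2)
        out.append(v - sum(row[lo:hi]) / (hi - lo))   # 3-tap neighborhood mean
    return out

def feedback_gain(row, target_peak=1.0):
    peak = max(abs(v) for v in row) or 1.0
    g = target_peak / peak
    return [g * v for v in row]

row = [10, 10, 1000, 10, 10]          # a bright spot under strong illumination
contrast = feedback_gain(subtract_local_average(log_transform(row)))
print(max(contrast))
```

    The log transform compresses the huge intensity range before the spatial filtering, which is why the bright spot no longer swamps its neighbours after normalization.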

  20. MPI parallelization of full PIC simulation code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Matsui, Tatsuki; Nunami, Masanori; Usui, Hideyuki; Moritaka, Toseo

    2010-11-01

    A new parallelization technique developed for the PIC method with adaptive mesh refinement (AMR) is introduced. In the AMR technique, the complicated cell arrangements are organized and managed as interconnected pointers with multiple resolution levels, forming a fully threaded tree structure as a whole. In order to retain this tree structure distributed over multiple processes, remote memory access, an extended feature of the MPI-2 standard, is employed. Another important feature of the present simulation technique is domain decomposition according to a modified Morton ordering, which groups equal numbers of particle-calculation loops and thereby achieves better load balance. Using this simulation code, preliminary results for basic physical problems are presented as a validity check, together with benchmarks testing performance and scalability.
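
    The standard 2-D Morton (Z-order) code underlying such a decomposition can be sketched as follows (the paper's modified variant differs in details not reproduced here):

```python
# Interleaving the bits of the cell coordinates keeps spatially nearby
# cells nearby on the 1-D curve, so cutting the curve into equal-work
# segments yields compact, load-balanced subdomains.

def morton2d(x, y, bits=16):
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return code

cells = [(x, y) for y in range(4) for x in range(4)]
curve = sorted(cells, key=lambda c: morton2d(*c))
print(curve[:4])   # one 2x2 quadrant is traversed before the next
```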

  1. Development of a framework and coding system for modifications and adaptations of evidence-based interventions

    PubMed Central

    2013-01-01

    Background Evidence-based interventions are frequently modified or adapted during the implementation process. Changes may be made to protocols to meet the needs of the target population or address differences between the context in which the intervention was originally designed and the one into which it is implemented [Addict Behav 2011, 36(6):630–635]. However, whether modification compromises or enhances the desired benefits of the intervention is not well understood. A challenge to understanding the impact of specific types of modifications is a lack of attention to characterizing the different types of changes that may occur. A system for classifying the types of modifications that are made when interventions and programs are implemented can facilitate efforts to understand the nature of modifications that are made in particular contexts as well as the impact of these modifications on outcomes of interest. Methods We developed a system for classifying modifications made to interventions and programs across a variety of fields and settings. We then coded 258 modifications identified in 32 published articles that described interventions implemented in routine care or community settings. Results We identified modifications made to the content of interventions, as well as to the context in which interventions are delivered. We identified 12 different types of content modifications, and our coding scheme also included ratings for the level at which these modifications were made (ranging from the individual patient level up to a hospital network or community). We identified five types of contextual modifications (changes to the format, setting, or patient population that do not in and of themselves alter the actual content of the intervention). We also developed codes to indicate who made the modifications and identified a smaller subset of modifications made to the ways that training or evaluations occur when evidence-based interventions are implemented. Rater

  2. WHITE DWARF MERGERS ON ADAPTIVE MESHES. I. METHODOLOGY AND CODE VERIFICATION

    SciTech Connect

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-10

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, coupling the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  3. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, coupling the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected on the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  4. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Briones, Janette C.; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
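
    The core of threshold-based ACM can be sketched as follows; the modcod list and Es/N0 thresholds below are rough illustrative values, not the DVB-S2 standard's tables:

```python
# Pick the highest-throughput modcod whose SNR threshold the current
# estimate clears, with a small hysteresis margin against brief fades.

MODCODS = [            # (name, spectral efficiency b/s/Hz, required Es/N0 dB)
    ("QPSK 1/2",   1.0,  1.0),
    ("QPSK 3/4",   1.5,  4.0),
    ("8PSK 2/3",   2.0,  6.6),
    ("8PSK 5/6",   2.5,  9.4),
    ("16APSK 3/4", 3.0, 10.2),
]

def select_modcod(snr_db, hysteresis_db=0.5):
    feasible = [m for m in MODCODS if snr_db - hysteresis_db >= m[2]]
    return max(feasible, key=lambda m: m[1]) if feasible else MODCODS[0]

print(select_modcod(7.5)[0])
```

    The predictive techniques mentioned in the abstract would, in effect, feed a forecast SNR rather than the instantaneous measurement into such a selector, so the modcod is already lowered when a shadowing event arrives.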

  5. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no fewer than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
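
    The Lagrange-interpolation step can be shown in isolation as a classical Shamir-style sketch over a prime field; the paper additionally encodes m-bonacci sequences into the polynomial and adds the quantum eavesdropping check, both omitted here:

```python
import random

P = 2**61 - 1   # prime field modulus (illustrative choice)

def make_shares(secret, threshold, n, rng=random.Random(0)):
    """Shares are points of a random degree-(threshold-1) polynomial."""
    coeffs = [secret % P] + [rng.randrange(P) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, threshold=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice
```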

  6. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-08-12

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no fewer than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  7. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no fewer than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  8. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme for hiding data directly in a partially encrypted version of H.264/AVC video is proposed, comprising three parts: selective encryption, data embedding, and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. A data-hider then embeds the additional data into the partially encrypted H.264/AVC video using a CABAC bin-string substitution technique, without accessing the plaintext of the video content. Since bin-string substitution is carried out on residual coefficients of approximately the same magnitude, the quality of the decrypted video is satisfactory. The video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results demonstrate the feasibility and efficiency of the proposed scheme.

  9. Coding and adaptation during mechanical stimulation in the leech nervous system.

    PubMed

    Pinato, G; Torre, V

    2000-12-15

    The experiments described here were designed to characterise sensory coding and adaptation during mechanical stimulation in the leech (Hirudo medicinalis). A chain of three ganglia and a segment of the body wall connected to the central ganglion were used. Eight extracellular suction pipettes and one or two intracellular electrodes were used to record action potentials from all mechanosensory neurones of the three ganglia. When the skin of the body wall was briefly touched with a filament exerting a force of about 2 mN, touch (T) cells in the central ganglion, but also those in adjacent ganglia (i.e. anterior and posterior), fired one or two action potentials. However, the threshold for action potential initiation was lower for T cells in the central ganglion than for those in adjacent ganglia. The timing of the first evoked action potential in a T cell was very reproducible, with a jitter often lower than 100 µs. Action potentials in T cells were not significantly correlated. When the force exerted by the filament was increased above 20 mN, pressure (P) cells in the central and neighbouring ganglia fired action potentials. Action potentials in P cells usually followed those evoked in T cells with a delay of about 20 ms and had a larger jitter of 0.5-10 ms. With stronger stimulations exceeding 50 mN, noxious (N) cells also fired action potentials. With such stimulations the majority of mechanosensory neurones in the three ganglia fired action potentials. The spatial properties of the whole receptive field of the mechanosensory neurones were explored by touching different parts of the skin. When the mechanical stimulation was applied for a longer time, i.e. 1 s, only P cells in the central ganglion continued to fire action potentials. P cells in neighbouring ganglia fully adapted after firing two or three action potentials. P cells in adjacent ganglia, having fully adapted to a steady mechanical stimulation of one part of the skin, fired action potentials following

  10. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.
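
    The binary mask behind a discrete Fresnel zone plate can be sketched as follows; the wavelength and focal length are illustrative midwave-IR values, and DAZLE's hybrid DFZP design would differ in details:

```python
import math

# A point at radius r is transparent when it falls in an even Fresnel
# zone, i.e. floor(r^2 / (lambda * f)) is even; the n-th zone boundary
# is at r_n = sqrt(n * lambda * f).

def zone_is_open(r_m, wavelength_m=4e-6, focal_m=0.5):
    return int(r_m ** 2 / (wavelength_m * focal_m)) % 2 == 0

def zone_radius(n, wavelength_m=4e-6, focal_m=0.5):
    """Outer radius of the n-th Fresnel zone."""
    return math.sqrt(n * wavelength_m * focal_m)

r1 = zone_radius(1)
print(zone_is_open(0.5 * r1), zone_is_open(1.2 * r1))
```

    An adaptive system such as DAZLE would render a pattern like this on the LCSLM apertures and re-center or re-scale it electronically rather than by moving optics.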

  11. Computationally Efficient Blind Code Synchronization for Asynchronous DS-CDMA Systems with Adaptive Antenna Arrays

    NASA Astrophysics Data System (ADS)

    Hu, Chia-Chang

    2005-12-01

    A novel space-time adaptive near-far robust code-synchronization array detector for asynchronous DS-CDMA systems is developed in this paper. There are the same basic requirements that are needed by the conventional matched filter of an asynchronous DS-CDMA system. For the real-time applicability, a computationally efficient architecture of the proposed detector is developed that is based on the concept of the multistage Wiener filter (MWF) of Goldstein and Reed. This multistage technique results in a self-synchronizing detection criterion that requires no inversion or eigendecomposition of a covariance matrix. As a consequence, this detector achieves a complexity that is only a linear function of the size of antenna array ([InlineEquation not available: see fulltext.]), the rank of the MWF ([InlineEquation not available: see fulltext.]), the system processing gain ([InlineEquation not available: see fulltext.]), and the number of samples in a chip interval ([InlineEquation not available: see fulltext.]), that is,[InlineEquation not available: see fulltext.]. The complexity of the equivalent detector based on the minimum mean-squared error (MMSE) or the subspace-based eigenstructure analysis is a function of[InlineEquation not available: see fulltext.]. Moreover, this multistage scheme provides a rapid adaptive convergence under limited observation-data support. Simulations are conducted to evaluate the performance and convergence behavior of the proposed detector with the size of the[InlineEquation not available: see fulltext.]-element antenna array, the amount of the[InlineEquation not available: see fulltext.]-sample support, and the rank of the[InlineEquation not available: see fulltext.]-stage MWF. The performance advantage of the proposed detector over other DS-CDMA detectors is investigated as well.

  12. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow-fading frequency-selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way that minimizes the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average MSE, while this is not the case for the optimal single-stream (UEP) system. Although the average PSNR of our method and that of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), a well-known and increasingly popular modulation scheme for combating ISI (Inter-Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a
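
The LBC/VBC protection ordering described above can be sketched as a toy allocation rule; the rate set, stream sizes, and greedy search below are illustrative assumptions, not the paper's analytical optimization.

```python
# Toy UEP allocator: give the location bit class (LBC) at least as much
# protection as the value bit class (VBC), choosing the strongest LBC
# code rate that still leaves the VBC a feasible rate within the block
# budget.  Rates are hypothetical RCPC-style code rates.
RATES = (1/3, 1/2, 2/3, 4/5)      # ascending = strongest protection first

def allocate_uep(lbc_bits, vbc_bits, budget_bits):
    for r_lbc in RATES:                                   # strongest LBC first
        for r_vbc in (r for r in RATES if r >= r_lbc):    # LBC >= VBC protection
            if lbc_bits / r_lbc + vbc_bits / r_vbc <= budget_bits:
                return r_lbc, r_vbc
    return None  # budget too small for any assignment
```

For example, 100 LBC bits and 300 VBC bits under an 800-bit budget get rates (1/3, 2/3): the LBC is protected as strongly as the budget allows.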

  13. Performance of an adaptive coding scheme in a fixed wireless cellular system working in millimeter-wave bands

    NASA Astrophysics Data System (ADS)

    Farahvash, Shayan; Akhavan, Koorosh; Kavehrad, Mohsen

    1999-12-01

This paper presents a solution to the problem of providing bit-error-rate performance guarantees in a fixed millimeter-wave wireless system, such as a local multipoint distribution system in line-of-sight or nearly line-of-sight applications. The basic concept is to take advantage of the slow-fading behavior of the fixed wireless channel by changing the transmission code rate. Rate-compatible punctured convolutional codes are used to implement adaptive coding. Cochannel interference analysis is carried out for the downlink direction, from base station to subscriber premises. Cochannel interference is treated as a noise-like random process with a power equal to the sum of the powers from a finite number of interfering base stations. Two different cellular architectures, based on using single or dual polarizations, are investigated. The average spectral efficiency of the proposed adaptive-rate system is found to be at least 3 times that of a fixed-rate system with similar outage requirements.
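
The rate-adaptation step behind such a scheme can be sketched as a threshold lookup; the code rates and SNR thresholds below are illustrative, not the paper's link-budget values.

```python
# Hypothetical RCPC rate table: (code rate, minimum SNR in dB at which
# that rate still meets the target bit-error rate).  Highest rate first.
RATE_TABLE = [(8/9, 16.0), (4/5, 13.0), (2/3, 10.0), (1/2, 7.0), (1/3, 4.0)]

def select_rate(snr_db):
    """Pick the highest (most efficient) code rate whose SNR threshold is met."""
    for rate, threshold in RATE_TABLE:
        if snr_db >= threshold:
            return rate
    return None  # outage: no rate sustains the target BER
```

Because the fixed wireless channel fades slowly, the selected rate changes rarely, so the signalling overhead of rate adaptation stays small.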

  14. Reduced adaptability, but no fundamental disruption, of norm-based face-coding mechanisms in cognitively able children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby

    2014-09-01

    Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience.

  15. Phase-shifting profilometry combined with Gray-code patterns projection: unwrapping error removal by an adaptive median filter.

    PubMed

    Zheng, Dongliang; Da, Feipeng; Kemao, Qian; Seah, Hock Soon

    2017-03-06

Phase-shifting profilometry combined with Gray-code pattern projection has been widely used for 3D measurement. In this technique, a phase-shifting algorithm is used to calculate the wrapped phase, and a set of Gray-code binary patterns is used to determine the unwrapped phase. In real measurements, the captured Gray-code patterns are no longer binary, resulting in phase unwrapping errors at a large number of erroneous pixels. Although this problem has received attention and has been largely resolved by a few methods, it remains challenging when a measured object has step-heights and the captured patterns contain invalid pixels. To effectively remove unwrapping errors while preserving step-heights, this paper proposes an effective method using an adaptive median filter. Both simulations and experiments demonstrate its effectiveness.
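
A generic adaptive median filter (a simpler stand-in for the paper's step-height-preserving variant) applied to an integer fringe-order map can be sketched as follows: the window grows until the median is not an extreme value, which rejects impulse-like unwrapping errors while keeping pixels that are consistent with their neighborhood.

```python
def adaptive_median(img, max_win=7):
    """Adaptive median filter on a 2-D list of fringe orders (sketch)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            win = 3
            while True:
                r = win // 2
                vals = sorted(img[j][i]
                              for j in range(max(0, y - r), min(h, y + r + 1))
                              for i in range(max(0, x - r), min(w, x + r + 1)))
                med, lo, hi = vals[len(vals) // 2], vals[0], vals[-1]
                if lo < med < hi or win >= max_win:
                    # keep the pixel unless it is itself an extreme (impulse)
                    out[y][x] = img[y][x] if lo < img[y][x] < hi else med
                    break
                win += 2   # median was an extreme: grow the window
    return out
```

On a constant fringe-order map with one impulsive unwrapping error, the impulse is replaced by the neighborhood median while valid pixels pass through unchanged.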

  16. SSPARAMA: A Nonlinear, Wave Optics Multipulse (and CW) Steady-State Propagation Code with Adaptive Coordinates

    DTIC Science & Technology

    1977-02-10

[Abstract garbled in the source scan. Legible fragments refer to the aperture as the pulse under study begins to propagate; to the solution of Eq. (8) being obtained subject to an energy condition; and to a study by Herrmann of Lincoln Laboratory of the propagation of a CW infinite Gaussian beam with a 70-cm diameter, with an absorption coefficient of 0.07 km-1.]

  17. Calculate waveguide aperture susceptance

    NASA Astrophysics Data System (ADS)

    Kwon, J.-K.; Ishii, T. K.

    1982-12-01

    A method is developed for calculating aperture susceptance which makes use of the distribution of an aperture's local fields. This method can be applied to the computation of the aperture susceptance of irises, as well as the calculation of the susceptances of waveguide filters, aperture antennas, waveguide cavity coupling, waveguide junctions, and heterogeneous boundaries such as inputs to ferrite or dielectric loaded waveguides. This method assumes a local field determined by transverse components of the incident wave in the local surface of the cross section in the discontinuity plane which lies at the aperture. The aperture susceptance is calculated by the use of the local fields, the law of energy conservation, and the principles of continuity of the fields. This method requires that the thickness of the aperture structure be zero, but this does not limit the practical usefulness of this local-field method.

  18. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  19. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.

  20. Development of an Adaptive Boundary-Fitted Coordinate Code for Use in Coastal and Estuarine Areas.

    DTIC Science & Technology

    1985-09-01

Miscellaneous Paper HL-80-3, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Johnson, B. H., Thompson, J. F., and Baker, A. J. 1984. "A ..." prepared for CERC, US Army Engineer Waterways Experiment Station, Vicksburg, Miss. Thompson, J. F. 1983. "A Boundary-Fitted Coordinate Code for ..." Vol 1. Thompson, J. F., Thames, F. C., and Mastin, C. W. 1977. "TOMCAT - A Code for Numerical Generation Systems on Fields Containing Any Number of

  1. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.

  2. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as for visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced by two orders of magnitude by building larger detectors.
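
The encode/decode cycle behind such coded-aperture imaging can be sketched in one dimension. The system above uses 2-D MURA masks and FFT-based deconvolution; this toy uses a length-13 1-D quadratic-residue MURA and direct circular correlation for clarity.

```python
def mura_1d(p):
    """1-D MURA mask and decoding array for prime p = 4m + 1 (here p = 13):
    mask[i] = 1 iff i is a nonzero quadratic residue mod p."""
    residues = {(i * i) % p for i in range(1, p)}
    mask = [0] + [1 if i in residues else 0 for i in range(1, p)]
    decoder = [1] + [1 if mask[i] else -1 for i in range(1, p)]
    return mask, decoder

def circ_convolve(a, b):
    """Shadowgram formation: periodic convolution of object with mask."""
    p = len(a)
    return [sum(a[i] * b[(k - i) % p] for i in range(p)) for k in range(p)]

def circ_correlate(a, b):
    """Decoding: periodic correlation of shadowgram with decoding array."""
    p = len(a)
    return [sum(a[j] * b[(j - k) % p] for j in range(p)) for k in range(p)]
```

Because the mask/decoder pair has a delta-like periodic correlation, decoding the shadowgram returns the source distribution scaled by the number of open mask elements ((p - 1)/2 = 6 for p = 13).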

  3. Variable-aperture screen

    DOEpatents

    Savage, George M.

    1991-01-01

Apparatus for separating material into first and second portions according to size, including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures, and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function.

  4. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
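
For reference, the Kirchhoff surface integral invoked above has, for a stationary control surface S, the standard retarded-time form (textbook notation; the paper's exact formulation may differ):

$$
p(\mathbf{x},t) = \frac{1}{4\pi}\oint_S \left[\; \frac{p}{r^{2}}\,\frac{\partial r}{\partial n} \;-\; \frac{1}{r}\,\frac{\partial p}{\partial n} \;+\; \frac{1}{c\,r}\,\frac{\partial r}{\partial n}\,\frac{\partial p}{\partial \tau} \right]_{\tau = t - r/c} \mathrm{d}S ,
$$

where r is the distance from the surface element to the observer, n is the outward surface normal, c is the speed of sound, and the bracket is evaluated at the retarded time τ = t - r/c. This is why the CFD code must supply the acoustic pressure together with its temporal and normal derivatives on the surface.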

  5. Rotating Aperture System

    DOEpatents

    Rusnak, Brian; Hall, James M.; Shen, Stewart; Wood, Richard L.

    2005-01-18

    A rotating aperture system includes a low-pressure vacuum pumping stage with apertures for passage of a deuterium beam. A stator assembly includes holes for passage of the beam. The rotor assembly includes a shaft connected to a deuterium gas cell or a crossflow venturi that has a single aperture on each side that together align with holes every rotation. The rotating apertures are synchronized with the firing of the deuterium beam such that the beam fires through a clear aperture and passes into the Xe gas beam stop. Portions of the rotor are lapped into the stator to improve the sealing surfaces, to prevent rapid escape of the deuterium gas from the gas cell.

  6. 47 CFR 25.134 - Licensing provisions for Very Small Aperture Terminal (VSAT) and C-band Small Aperture Terminal...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... network using a code division multiple access (CDMA) technique, N is the maximum number of co-frequency... Terminal (VSAT) and C-band Small Aperture Terminal (CSAT) networks. 25.134 Section 25.134 Telecommunication...) and C-band Small Aperture Terminal (CSAT) networks. (a)(1) [Reserved] (2) Large Networks of...

  7. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
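
The structure of the MCS-assignment problem can be sketched with a toy exhaustive search standing in for the ILP; the layer sizes, MCS efficiencies, decode fractions, and slot budget below are illustrative assumptions.

```python
from itertools import product

# (spectral efficiency in bits/slot, fraction of users able to decode):
# a more efficient MCS reaches fewer users.
MCS = [(1.0, 1.0), (2.0, 0.7), (4.0, 0.4)]
LAYER_BITS = [100.0, 150.0]    # base layer, enhancement layer
SLOT_BUDGET = 160.0

def slots_needed(assign):
    return sum(LAYER_BITS[l] / MCS[m][0] for l, m in enumerate(assign))

def utility(assign):
    # SVC dependency: a user benefits from layer l only if it decodes
    # every lower layer, so the reachable fraction is a running minimum.
    total, reach = 0.0, 1.0
    for l, m in enumerate(assign):
        reach = min(reach, MCS[m][1])
        total += reach * LAYER_BITS[l]
    return total

def best_assignment():
    feasible = [a for a in product(range(len(MCS)), repeat=len(LAYER_BITS))
                if slots_needed(a) <= SLOT_BUDGET]
    return max(feasible, key=utility)
```

In this instance the optimum gives both layers the middle MCS: a more aggressive enhancement-layer MCS would free slots but cut the reachable audience.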

  8. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to the wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.

  9. An adaptive scan of high frequency subbands for dyadic intra frame in MPEG4-AVC/H.264 scalable video coding

    NASA Astrophysics Data System (ADS)

    Shahid, Z.; Chaumont, M.; Puech, W.

    2009-01-01

This paper develops a new adaptive scanning methodology for an intra frame scalable coding framework based on a subband/wavelet (DWTSB) coding approach for MPEG-4 AVC/H.264 scalable video coding (SVC). It attempts to take advantage of prior knowledge of the frequencies present in the different higher-frequency subbands. We propose a dyadic intra frame coding method with adaptive scan (DWTSB-AS) for each subband, as the traditional zigzag scan is not suitable for high-frequency subbands. Thus, merely by modifying the scan order of the intra frame scalable coding framework of H.264, better compression can be obtained. The proposed algorithm has been theoretically justified and is thoroughly evaluated against the current SVC test model JSVM and against DWTSB through extensive coding experiments for scalable coding of intra frames. The simulation results show that the proposed scanning algorithm consistently outperforms JSVM and DWTSB in PSNR performance. This results in extra compression for intra frames, along with spatial scalability. Thus image and video coding applications, traditionally serviced by separate coders, can be efficiently provided by an integrated coding system.
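
The contrast between a fixed zigzag scan and a subband-adapted scan can be sketched as follows; the per-subband energy map is an illustrative assumption (e.g., an HL-like subband with energy concentrated along one edge), not the paper's trained statistics.

```python
def zigzag(n):
    """Conventional JPEG-style zigzag order for an n x n block: low
    frequencies (small i + j) first, alternating diagonal direction."""
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

def adaptive_scan(energy):
    """Visit positions in decreasing order of observed coefficient energy,
    so likely-significant coefficients come first (longer zero runs later)."""
    n = len(energy)
    return sorted(((i, j) for i in range(n) for j in range(n)),
                  key=lambda p: -energy[p[0]][p[1]])
```

For a subband whose energy sits in the last column, the adaptive scan starts there instead of at the DC corner, which is the effect the abstract exploits.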

  10. Perceiving Affordances for Fitting through Apertures

    ERIC Educational Resources Information Center

    Ishak, Shaziela; Adolph, Karen E.; Lin, Grace C.

    2008-01-01

    Affordances--possibilities for action--are constrained by the match between actors and their environments. For motor decisions to be adaptive, affordances must be detected accurately. Three experiments examined the correspondence between motor decisions and affordances as participants reached through apertures of varying size. A psychophysical…

  11. Sub-Aperture Interferometers

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2010-01-01

Sub-aperture interferometers -- also called wavefront-split interferometers -- have been developed for simultaneously measuring displacements of multiple targets. The terms "sub-aperture" and "wavefront-split" signify that the original measurement light beam in an interferometer is split into multiple sub-beams derived from non-overlapping portions of the original measurement-beam aperture. Each measurement sub-beam is aimed at a retroreflector mounted on one of the targets. The splitting of the measurement beam is accomplished by use of truncated mirrors and masks, as shown in the example below.

  12. Distributed aperture synthesis.

    PubMed

    Rabb, David; Jameson, Douglas; Stokes, Andrew; Stafford, Jason

    2010-05-10

    Distributed aperture synthesis is an exciting technique for recovering high-resolution images from an array of small telescopes. Such a system requires optical field values measured at individual apertures to be phased together so that a single, high-resolution image can be synthesized. This paper describes the application of sharpness metrics to the process of phasing multiple coherent imaging systems into a single high-resolution system. Furthermore, this paper will discuss hardware and present the results of simulations and experiments which will illustrate how aperture synthesis is performed.
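
The sharpness-metric phasing described above can be sketched with a two-aperture toy model: scan the piston phase applied to one aperture and keep the value maximizing the sharpness metric sum(I^2). The two-slit fringe model and sample points are illustrative assumptions, not the paper's hardware.

```python
import math

SAMPLES = [-0.2, -0.1, 0.0, 0.1, 0.2]   # image-plane angles (arbitrary units)

def sharpness(correction, piston_error):
    """Sum of squared intensities of a two-beam fringe pattern with
    residual piston phase (piston_error + correction)."""
    s = 0.0
    for u in SAMPLES:
        intensity = 2.0 + 2.0 * math.cos(2.0 * math.pi * u
                                         + piston_error + correction)
        s += intensity ** 2
    return s

def phase_correction(piston_error, steps=256):
    """Grid-search the piston correction that maximizes sharpness."""
    grid = [2.0 * math.pi * k / steps for k in range(steps)]
    return max(grid, key=lambda c: sharpness(c, piston_error))
```

The sharpness maximum occurs when the residual piston vanishes, so the search recovers (minus) the unknown piston error to within the grid spacing, which is the mechanism that phases the sub-apertures into a single coherent system.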

  13. Bistatic synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Yates, Gillian

    Synthetic aperture radar (SAR) allows all-weather, day and night, surface surveillance and has the ability to detect, classify and geolocate objects at long stand-off ranges. Bistatic SAR, where the transmitter and the receiver are on separate platforms, is seen as a potential means of countering the vulnerability of conventional monostatic SAR to electronic countermeasures, particularly directional jamming, and avoiding physical attack of the imaging platform. As the receiving platform can be totally passive, it does not advertise its position by RF emissions. The transmitter is not susceptible to jamming and can, for example, operate at long stand-off ranges to reduce its vulnerability to physical attack. This thesis examines some of the complications involved in producing high-resolution bistatic SAR imagery. The effect of bistatic operation on resolution is examined from a theoretical viewpoint and analytical expressions for resolution are developed. These expressions are verified by simulation work using a simple 'point by point' processor. This work is extended to look at using modern practical processing engines for bistatic geometries. Adaptations of the polar format algorithm and range migration algorithm are considered. The principal achievement of this work is a fully airborne demonstration of bistatic SAR. The route taken in reaching this is given, along with some results. The bistatic SAR imagery is analysed and compared to the monostatic imagery collected at the same time. Demonstrating high-resolution bistatic SAR imagery using two airborne platforms represents what I believe to be a European first and is likely to be the first time that this has been achieved outside the US (the UK has very little insight into US work on this topic). Bistatic target characteristics are examined through the use of simulations. This also compares bistatic imagery with monostatic and gives further insight into the utility of bistatic SAR.

  14. N-body simulations for f(R) gravity using a self-adaptive particle-mesh code

    SciTech Connect

    Zhao Gongbo; Koyama, Kazuya; Li Baojiu

    2011-02-15

    We perform high-resolution N-body simulations for f(R) gravity based on a self-adaptive particle-mesh code MLAPM. The chameleon mechanism that recovers general relativity on small scales is fully taken into account by self-consistently solving the nonlinear equation for the scalar field. We independently confirm the previous simulation results, including the matter power spectrum, halo mass function, and density profiles, obtained by Oyaizu et al.[Phys. Rev. D 78, 123524 (2008)] and Schmidt et al.[Phys. Rev. D 79, 083518 (2009)], and extend the resolution up to k{approx}20 h/Mpc for the measurement of the matter power spectrum. Based on our simulation results, we discuss how the chameleon mechanism affects the clustering of dark matter and halos on full nonlinear scales.

  15. Design of signal-adapted multidimensional lifting scheme for lossy coding.

    PubMed

    Gouze, Annabelle; Antonini, Marc; Barlaud, Michel; Macq, Benoît

    2004-12-01

    This paper proposes a new method for the design of lifting filters to compute a multidimensional nonseparable wavelet transform. Our approach is stated in the general case, and is illustrated for the 2-D separable and for the quincunx images. Results are shown for the JPEG2000 database and for satellite images acquired on a quincunx sampling grid. The design of efficient quincunx filters is a difficult challenge which has already been addressed for specific cases. Our approach enables the design of less expensive filters adapted to the signal statistics to enhance the compression efficiency in a more general case. It is based on a two-step lifting scheme and joins the lifting theory with Wiener's optimization. The prediction step is designed in order to minimize the variance of the signal, and the update step is designed in order to minimize a reconstruction error. Application for lossy compression shows the performances of the method.
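
The prediction step can be sketched in one dimension: fit the weights predicting each odd sample from its two even neighbours by least squares (2x2 normal equations), a Wiener-style minimum-variance design. The signal and setup below are illustrative; the paper works with nonseparable multidimensional filters.

```python
def predictor_weights(x):
    """Least-squares weights (a, b) minimizing sum (odd - a*left - b*right)^2
    over the interior odd samples, via the 2x2 normal equations."""
    k = (len(x) - 2) // 2
    left  = [x[2 * i]     for i in range(k)]
    right = [x[2 * i + 2] for i in range(k)]
    odd   = [x[2 * i + 1] for i in range(k)]
    saa = sum(l * l for l in left)
    sbb = sum(r * r for r in right)
    sab = sum(l * r for l, r in zip(left, right))
    say = sum(l * y for l, y in zip(left, odd))
    sby = sum(r * y for r, y in zip(right, odd))
    det = saa * sbb - sab * sab
    return ((say * sbb - sby * sab) / det, (saa * sby - sab * say) / det)

def detail_energy(x, a, b):
    """Energy of the detail (prediction residual) signal."""
    return sum((x[2 * i + 1] - a * x[2 * i] - b * x[2 * i + 2]) ** 2
               for i in range((len(x) - 2) // 2))
```

By construction the fitted weights give detail energy no larger than the fixed (1/2, 1/2) midpoint predictor of a conventional lifting step, which is the source of the compression gain claimed above.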

  16. Variable-aperture screen

    DOEpatents

    Savage, G.M.

    1991-10-29

Apparatus is described for separating material into first and second portions according to size, including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures, and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function. 10 figures.

  17. Active aperture phased arrays

    NASA Astrophysics Data System (ADS)

    Shenoy, R. P.

    1989-04-01

    Developments towards the realization of active aperture phased arrays are reviewed. The technology and cost aspects of the power amplifier and phase shifter subsystems are discussed. Consideration is given to research concerning T/R modules, MESFETs, side lobe control, beam steering, optical control techniques, and printed circuit antennas. Methods for configuring the array are examined, focusing on the tile and brick configurations. It is found that there is no technological impediment for introducing active aperture phased arrays.

  18. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating the tsunami waves near shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel times of the tsunami waves. © 2011 IEEE.
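
The flag-and-refine step at the heart of AMR can be sketched in one dimension; this is a generic illustration (gradient-based flagging, refinement ratio 2), not GeoClaw's own wave-propagation-specific criteria or 2-D patch machinery.

```python
def flag_cells(q, tol):
    """Flag coarse cells whose undivided difference exceeds tol
    (e.g., near a steep tsunami wave front)."""
    return [i for i in range(1, len(q) - 1)
            if max(abs(q[i + 1] - q[i]), abs(q[i] - q[i - 1])) > tol]

def refine_patch(xlo, dx, flags, ratio=2):
    """Fine-cell centers of a patch covering the span of flagged cells."""
    lo, hi = min(flags), max(flags) + 1
    fine_dx = dx / ratio
    return [xlo + lo * dx + (k + 0.5) * fine_dx
            for k in range((hi - lo) * ratio)]
```

Applying the same rule recursively on the fine patch yields the nested levels (4 in the simulation above) that concentrate the 20-meter resolution only where the flow demands it.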

  19. A new type of color-coded light structures for an adapted and rapid determination of point correspondences for 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Caulier, Yannick; Bernhard, Luc; Spinnler, Klaus

    2011-05-01

This paper proposes a new type of color-coded light structures for the inspection of complex moving objects. The novelty of the method lies in the generation of free-form color patterns permitting the projection of color structures adapted to the geometry of the surfaces to be characterized. The point-correspondence determination algorithm consists of a stepwise procedure involving simple and computationally fast methods. The algorithm is therefore robust against the varying recording conditions typically arising in real-time quality-control environments and can be integrated for industrial inspection purposes. The proposed approach is validated and compared on the basis of different experiments concerning 3D surface reconstruction by projecting adapted spatial color-coded patterns. It is demonstrated that, for certain inspection requirements, the method permits coding more reference points than similar color-coded matrix methods.

  20. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that the value of the sparsity is known before starting each data gathering epoch; thus they ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
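
The compressed-sensing core of CDG can be sketched with a toy 1-sparse example: the sink gathers a few random linear measurements y = A x of the length-n network signal and recovers it with a single OMP-style matched-filter step. Sizes, the Gaussian measurement matrix, and the 1-sparse restriction are illustrative assumptions, not the paper's scheme.

```python
import random

def normalize_columns(A):
    """Scale each column of A to unit norm (makes the matched-filter
    step provably pick the right column for a 1-sparse signal)."""
    m, n = len(A), len(A[0])
    for j in range(n):
        norm = sum(A[i][j] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            A[i][j] /= norm
    return A

def measure(x, A):
    """In-network aggregation: each measurement is one weighted sum y_i = <A_i, x>."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def omp_1sparse(y, A):
    """One OMP iteration: pick the best-correlated column, then solve
    least squares on it."""
    n = len(A[0])
    col = lambda j: [row[j] for row in A]
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    j_best = max(range(n), key=lambda j: abs(dot(col(j), y)))  # matched filter
    c = col(j_best)
    x_hat = [0.0] * n
    x_hat[j_best] = dot(c, y) / dot(c, c)   # least squares on chosen column
    return x_hat
```

With m = 5 measurements of an n = 12 node signal, the sink pinpoints the single active node and its reading, which is the measurement-savings effect the feedback scheme then tunes adaptively.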

  1. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-04-01

Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering--CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts, thus ignoring the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes--MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both ocean-temperature datasets and a practical network deployment also prove the effectiveness of our proposed feedback CDG scheme.
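The adaptive measurement-formation loop described in the abstract can be illustrated in miniature: the sink adds one random measurement per round and stops once the sparse reconstruction stabilizes and is consistent with all measurements taken so far. This is a hedged sketch, not the paper's actual scheme (which also involves network coding of the in-network transmissions); the Gaussian measurement vectors, signal sizes, and tolerance are illustrative assumptions, and orthogonal matching pursuit stands in for the recovery step.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(Phi.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))  # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat[support] = coef
    return x_hat

def adaptive_gather(x_true, sparsity, tol=1e-8, max_m=60, seed=0):
    """Add one random measurement per round; stop when the estimate is stable
    across rounds and explains every measurement gathered so far."""
    rng = np.random.default_rng(seed)
    n = x_true.size
    Phi, y = np.empty((0, n)), np.empty(0)
    prev = np.zeros(n)
    for m in range(1, max_m + 1):
        phi = rng.standard_normal(n) / np.sqrt(n)   # new measurement vector
        Phi = np.vstack([Phi, phi])
        y = np.append(y, phi @ x_true)              # projection sensed at the sink
        est = omp(Phi, y, sparsity)
        stable = np.linalg.norm(est - prev) < tol
        consistent = np.linalg.norm(y - Phi @ est) < tol
        if m > sparsity and stable and consistent:  # termination rule
            return est, m
        prev = est
    return prev, max_m

n = 50
x = np.zeros(n)
x[[3, 17, 40]] = [1.5, -2.0, 0.7]                   # 3-sparse "network data"
est, m_used = adaptive_gather(x, sparsity=3)
```

With an exactly sparse signal the loop typically halts with far fewer than n measurements, which is the point of querying adaptively rather than fixing the measurement count in advance.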

  2. The AdaptiSPECT Imaging Aperture

    PubMed Central

    Chaix, Cécile; Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    In this paper, we present the imaging aperture of an adaptive SPECT imaging system being developed at the Center for Gamma Ray Imaging (AdaptiSPECT). AdaptiSPECT is designed to automatically change its configuration in response to preliminary data, in order to improve image quality for a particular task. In a traditional pinhole SPECT imaging system, the characteristics (magnification, resolution, field of view) are set by the geometry of the system, and any modification can be accomplished only by manually changing the collimator and the distance of the detector to the center of the field of view. Optimization of the imaging system for a specific task on a specific individual is therefore difficult. In an adaptive SPECT imaging system, on the other hand, the configuration can be conveniently changed under computer control. A key component of an adaptive SPECT system is its aperture. In this paper, we present the design, specifications, and fabrication of the adaptive pinhole aperture that will be used for AdaptiSPECT, as well as the controls that enable autonomous adaptation. PMID:27019577

  3. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections are designed. With ever higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed to enable a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores grows, increasing the latency and energy required to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although these provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another alternative; they need no physical interconnection layout, as data travel over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA-based MAC protocol
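The core mechanism a CDMA MAC relies on can be shown with orthogonal Walsh spreading codes: concurrent transmitter-receiver pairs share the channel simultaneously, and each receiver recovers its own bits by correlating against its assigned code. A minimal numpy sketch; the code length, pair count, and bit values are illustrative, not the thesis's actual protocol.

```python
import numpy as np

def walsh(n):
    """Build an n x n Walsh-Hadamard matrix (n must be a power of two);
    its rows are mutually orthogonal spreading codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Each transmitter-receiver pair is assigned one orthogonal row as its code.
codes = walsh(4)
bits_a = np.array([1, -1, 1])    # antipodal bits from pair A
bits_b = np.array([-1, -1, 1])   # pair B transmits at the same time

# Spread: each bit is multiplied by the pair's code; chips add on the channel.
channel = np.concatenate([b * codes[1] for b in bits_a]) \
        + np.concatenate([b * codes[2] for b in bits_b])

# Despread: correlate received chips with each pair's code and take the sign.
rx = channel.reshape(-1, 4)
decoded_a = np.sign(rx @ codes[1])
decoded_b = np.sign(rx @ codes[2])
```

Because the rows are orthogonal, each correlation cancels the other pair's contribution exactly, which is what lets multiple pairs use the wireless channel at once without per-packet arbitration.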

  4. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effect of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be submicron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be on the order of the synthetic aperture transit time, or the transmitter phase has to be recorded and a correction applied on detection.

  5. First Clinical Release of an Online, Adaptive, Aperture-Based Image-Guided Radiotherapy Strategy in Intensity-Modulated Radiotherapy to Correct for Inter- and Intrafractional Rotations of the Prostate

    SciTech Connect

    Deutschmann, Heinz; Kametriser, Gerhard; Steininger, Philipp; Scherer, Philipp; Schoeller, Helmut; Gaisberger, Christoph; Mooslechner, Michaela; Mitterlechner, Bernhard; Weichenberger, Harald; Fastner, Gert; Wurstbauer, Karl; Jeschke, Stephan; Forstner, Rosemarie; Sedlmayer, Felix

    2012-08-01

Purpose: We developed and evaluated a correction strategy for prostate rotations using direct adaptation of segments in intensity-modulated radiotherapy (IMRT). Method and Materials: Implanted fiducials (four gold markers) were used to determine interfractional translations, rotations, and dilations of the prostate. We used hybrid imaging: the markers were automatically detected in two pretreatment planar X-ray projections; their actual position in three-dimensional space was first reconstructed from these images. The structure set comprising prostate, seminal vesicles, and adjacent rectum wall was transformed accordingly in 6 degrees of freedom. Shapes of IMRT segments were geometrically adapted in a class-solution forward-planning approach, derived within seconds on-site and treated immediately. Intrafractional movements were followed in MV electronic portal images captured on the fly. Results: In 31 of 39 patients, for 833 of 1013 fractions (supine, flat couch, knee support, comfortably full bladder, empty rectum, no intraprostatic marker migrations >2 mm of more than one marker), the online aperture adaptation allowed safe reduction of clinical target volume-planning target volume margins (prostate) down to 5 mm when only interfractional corrections were applied: dominant L-R rotations were found to be 5.3° (mean of means), standard deviation of means ±4.9°, maximum 30.7°. Three-dimensional vector translations relative to skin markings were 9.3 ± 4.4 mm (maximum, 23.6 mm). Intrafractional movements in 7.7 ± 1.5 min (maximum, 15.1 min) between kV imaging and the last beam's electronic portal images showed further L-R rotations of 2.5° ± 2.3° (maximum, 26.9°), and three-dimensional vector translations of 3.0 ± 3.7 mm (maximum, 10.2 mm). Addressing intrafractional errors could further reduce margins to 3 mm. Conclusion: We demonstrated the clinical feasibility of an online
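The interfractional correction step amounts to estimating a rigid transform from the implanted fiducials and applying it to the structure set before the segment shapes are re-derived. A sketch using the standard Kabsch (SVD) algorithm; the marker coordinates, the 5.3° example rotation, and the translation are illustrative numbers, and the clinical system described above additionally handles dilations and the segment adaptation itself.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation R and translation t mapping point set P onto Q."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Planned fiducial positions (mm) and their positions in pretreatment imaging.
planned = np.array([[0, 0, 0], [10, 0, 0], [0, 12, 0], [0, 0, 8]], float)
theta = np.deg2rad(5.3)                       # e.g. a 5.3 degree L-R rotation
Rx = np.array([[1, 0, 0],
               [0, np.cos(theta), -np.sin(theta)],
               [0, np.sin(theta),  np.cos(theta)]])
observed = planned @ Rx.T + np.array([2.0, -1.0, 0.5])   # rotated + shifted

R, t = kabsch(planned, observed)
# The recovered transform would then be applied to the structure set
# (prostate, vesicles, rectum wall) in the same way:
contour_point = np.array([5.0, 5.0, 2.0])
moved = R @ contour_point + t
```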

  6. Advanced Multiple Aperture Seeing Profiler

    NASA Astrophysics Data System (ADS)

    Ren, Deqing; Zhao, Gang

    2016-10-01

Measurements of the seeing profile of the atmospheric turbulence as a function of altitude are crucial for solar astronomical site characterization, as well as for the optimized design and performance estimation of solar Multi-Conjugate Adaptive Optics (MCAO). Knowledge of the seeing distribution, up to 30 km, at a potential new solar observation site is required for future solar MCAO developments. Current optical seeing profile measurement techniques are limited by the need to use a large facility solar telescope, which is a serious limitation on characterizing a site's seeing conditions in terms of the seeing profile. Based on our previous work, we propose a compact solar seeing profiler called the Advanced Multiple Aperture Seeing Profiler (A-MASP). A-MASP consists of two small telescopes, each with a 100 mm aperture. The two small telescopes can be installed on a commercial computerized tripod to track solar granule structures for seeing profile measurement. A-MASP is extremely simple and portable, which makes it an ideal system to bring to a potential new site for seeing profile measurements.

  7. Image set based face recognition using self-regularized non-negative coding and adaptive distance metric learning.

    PubMed

    Mian, Ajmal; Hu, Yiqun; Hartley, Richard; Owens, Robyn

    2013-12-01

    Simple nearest neighbor classification fails to exploit the additional information in image sets. We propose self-regularized nonnegative coding to define between set distance for robust face recognition. Set distance is measured between the nearest set points (samples) that can be approximated from their orthogonal basis vectors as well as from the set samples under the respective constraints of self-regularization and nonnegativity. Self-regularization constrains the orthogonal basis vectors to be similar to the approximated nearest point. The nonnegativity constraint ensures that each nearest point is approximated from a positive linear combination of the set samples. Both constraints are formulated as a single convex optimization problem and the accelerated proximal gradient method with linear-time Euclidean projection is adapted to efficiently find the optimal nearest points between two image sets. Using the nearest points between a query set and all the gallery sets as well as the active samples used to approximate them, we learn a more discriminative Mahalanobis distance for robust face recognition. The proposed algorithm works independently of the chosen features and has been tested on gray pixel values and local binary patterns. Experiments on three standard data sets show that the proposed method consistently outperforms existing state-of-the-art methods.
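The between-set distance idea can be sketched with a simplified stand-in: find the closest pair of points in the convex hulls of two sample sets, enforcing nonnegative coefficients by projecting onto the probability simplex, with plain projected gradient descent in place of the paper's accelerated proximal gradient solver and its self-regularization term. The toy sets, step size, and iteration count below are illustrative assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1), 0)

def set_to_set_distance(X, Y, iters=500, lr=0.01):
    """Nearest points X @ a and Y @ b with nonnegative convex coefficients,
    found by projected gradient descent on 0.5 * ||X a - Y b||^2."""
    a = np.full(X.shape[1], 1 / X.shape[1])
    b = np.full(Y.shape[1], 1 / Y.shape[1])
    for _ in range(iters):
        r = X @ a - Y @ b                      # residual between the set points
        a = project_simplex(a - lr * (X.T @ r))
        b = project_simplex(b + lr * (Y.T @ r))
    return np.linalg.norm(X @ a - Y @ b), X @ a, Y @ b

X = np.array([[0., 0.], [0., 2.]])   # set 1 samples as columns: (0,0), (0,2)
Y = np.array([[1., 1.], [0., 2.]])   # set 2 samples as columns: (1,0), (1,2)
d, p, q = set_to_set_distance(X, Y)  # hulls are parallel segments 1 apart
```

In the paper this distance (computed from features such as LBP histograms, with the learned Mahalanobis metric on top) replaces simple nearest-neighbor matching between image sets.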

  8. Apodizer aperture for lasers

    DOEpatents

    Jorna, Siebe; Siebert, Larry D.; Brueckner, Keith A.

    1976-11-09

An aperture attenuator for use with high power lasers which includes glass windows shaped and assembled to form an annulus chamber which is filled with a dye solution. The annulus chamber is shaped such that the section in alignment with the axis of the incident beam follows a curve which is represented by the equation y = (r - r_o)^n.
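A quick numerical reading of that profile: if the dye-cell thickness along the beam axis grows as y = (r - r_o)^n outside radius r_o, Beer-Lambert absorption through the dye yields a smoothly apodized (soft-edged) aperture rather than a hard stop. The radius, exponent, and absorptivity below are illustrative assumptions, not values from the patent.

```python
import numpy as np

r_o = 5.0      # clear-aperture radius (mm), illustrative
n = 2.0        # profile exponent from y = (r - r_o)**n
alpha = 0.8    # dye absorptivity per unit thickness, illustrative

r = np.linspace(0.0, 8.0, 9)
thickness = np.maximum(r - r_o, 0.0) ** n        # dye path length along the axis
transmission = np.exp(-alpha * thickness)        # Beer-Lambert attenuation
# Inside r_o transmission is 1; beyond it the beam edge rolls off smoothly,
# with n controlling how gradual the apodization is.
```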

  9. Synthetic Aperture Radar Interferometry

    NASA Technical Reports Server (NTRS)

    Rosen, P. A.; Hensley, S.; Joughin, I. R.; Li, F.; Madsen, S. N.; Rodriguez, E.; Goldstein, R. M.

    1998-01-01

    Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristics of the surface. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering.

  10. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.

  11. Coded source neutron imaging

    NASA Astrophysics Data System (ADS)

    Bingham, Philip; Santos-Villalobos, Hector; Tobin, Ken

    2011-03-01

Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
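The MTF-from-LSF step used in both records above is a standard computation: the MTF is the magnitude of the Fourier transform of the line spread function, normalized to 1 at zero frequency. A sketch with a Gaussian LSF standing in for the simulated tilted-edge data; the sampling pitch and LSF width are illustrative assumptions, not the paper's values.

```python
import numpy as np

dx = 1.0                                  # sampling pitch, micrometers (assumed)
x = np.arange(-256, 256) * dx
sigma = 10.0                              # LSF width, stand-in for hole diameter
lsf = np.exp(-x**2 / (2 * sigma**2))      # synthetic line spread function

mtf = np.abs(np.fft.rfft(lsf))            # |FT of LSF|
mtf /= mtf[0]                             # normalize to unity at DC
freqs = np.fft.rfftfreq(x.size, d=dx)     # cycles per micrometer

# Resolution is often quoted as the frequency where the MTF drops to 10%.
f10 = freqs[np.argmax(mtf < 0.1)]
```

In practice the LSF would come from differentiating the measured (or simulated) tilted-edge response rather than from an assumed Gaussian.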

  12. The neural code for taste in the nucleus of the solitary tract of the rat: effects of adaptation.

    PubMed

    Di Lorenzo, P M; Lemon, C H

    2000-01-10

    Adaptation of the tongue to NaCl, HCl, quinine or sucrose was used as a tool to study the stability and organization of response profiles in the nucleus of the solitary tract (NTS). Taste responses in the NTS were recorded in anesthetized rats before and after adaptation of the tongue to NaCl, HCl, sucrose or quinine. Results showed that the magnitude of response to test stimuli following adaptation was a function of the context, i.e., adaptation condition, in which the stimuli were presented. Over half of all taste responses were either attenuated or enhanced following the adaptation procedure: NaCl adaptation produced the most widespread, non-stimulus-selective cross-adaptation and sucrose adaptation produced the least frequent cross-adaptation and the most frequent enhancement of taste responses. Adaptation to quinine cross-adapted to sucrose and adaptation to HCl cross-adapted to quinine in over half of the units tested. The adaptation procedure sometimes unmasked taste responses where none were present beforehand and sometimes altered taste responses to test stimuli even though the adapting stimulus did not itself produce a response. These effects demonstrated a form of context-dependency of taste responsiveness in the NTS and further suggest a broad potentiality in the sensitivity of NTS units across taste stimuli. Across unit patterns of response remained distinct from each other under all adaptation conditions. Discriminability of these patterns may provide a neurophysiological basis for residual psychophysical abilities following adaptation.

  13. Measurement and simulation of apertures on Z hohlraums

    SciTech Connect

    Chrien, R.E.; Matuska, W. Jr.; Swenson, F.J.

    1998-12-01

The authors have performed aperture measurements and simulations for vacuum hohlraums heated by wire array implosions. A low-Z plastic coating is often applied to the aperture to create a high ablation pressure which retards the expansion of the gold hohlraum wall. However, this interface is unstable and may be subject to the development of highly nonlinear perturbations (jets) as a result of shocks converging near the edge of the aperture. These experiments have been simulated using Lagrangian and Eulerian radiation hydrodynamics codes.

  14. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload, and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  15. Adaptive Techniques for Large Space Apertures.

    DTIC Science & Technology

    1980-03-01

... requires external attitude determination such as a star tracker ... control systems into one unit; namely, a fine pointing control using the gimbal rates as the control variables while maintaining constant rotor speeds (CMG mode), and a coarse control for large maneuvers using the rotor speeds as the control variables and locking the gimbals (RW mode). The simultaneous

  16. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  17. Configurable Aperture Space Telescope

    NASA Technical Reports Server (NTRS)

    Ennico, Kimberly; Vassigh, Kenny; Bendek, Selman; Young, Zion W; Lynch, Dana H.

    2015-01-01

In December 2014, we were awarded Center Innovation Fund support to evaluate an optical and mechanical concept for a novel implementation of a segmented telescope based on modular, interconnected small sats (satlets). The concept is called CAST, a Configurable Aperture Space Telescope. With a current TRL of 2, we aim to reach TRL 3 in September 2015 by demonstrating a 2x2 mirror system to validate our optical model and error budget, providing a strawman mechanical architecture and structural damping analyses, and deriving future satlet-based observatory performance requirements. CAST provides alternative access to a visible and/or UV wavelength space telescope with a 1-meter or larger aperture for the NASA SMD Astrophysics and Planetary Science community after the retirement of HST.

  18. Configurable Aperture Space Telescope

    NASA Technical Reports Server (NTRS)

    Ennico, Kimberly; Bendek, Eduardo

    2015-01-01

In December 2014, we were awarded Center Innovation Fund support to evaluate an optical and mechanical concept for a novel implementation of a segmented telescope based on modular, interconnected small sats (satlets). The concept is called CAST, a Configurable Aperture Space Telescope. With a current TRL of 2, we aim to reach TRL 3 in September 2015 by demonstrating a 2x2 mirror system to validate our optical model and error budget, providing a strawman mechanical architecture and structural damping analyses, and deriving future satlet-based observatory performance requirements. CAST provides alternative access to a visible and/or UV wavelength space telescope with a 1-meter or larger aperture for the NASA SMD Astrophysics and Planetary Science community after the retirement of HST.

  19. Aperture center energy showcase

    SciTech Connect

    Torres, J. J.

    2012-03-01

Sandia and Forest City have established a Cooperative Research and Development Agreement (CRADA), and the partnership provides a unique opportunity to take technology research and development from demonstration to application in a sustainable community. A project under that CRADA, the Aperture Center Energy Showcase, offers a means to develop exhibits and demonstrations that present feedback to community members, Sandia customers, and visitors. The technologies included in the showcase focus on renewable energy, efficiency, and resilience. These technologies are generally scalable and provide secure, efficient solutions to energy production, delivery, and usage. In addition to establishing the Energy Showcase, support offices and conference capabilities that facilitate research, collaboration, and demonstration were created. The Aperture Center project focuses on establishing a location that provides outreach, awareness, and demonstration of research findings, emerging technologies, and project developments to Sandia customers, visitors, and Mesa del Sol community members.

  20. Integrated electrochromic aperture diaphragm

    NASA Astrophysics Data System (ADS)

    Deutschmann, T.; Oesterschulze, E.

    2014-05-01

In recent years, the triumphal march of handheld electronics with integrated cameras has opened amazing fields for small, high-performing optical systems. For this purpose miniaturized iris apertures are of practical importance because they are essential to control both the dynamic range of the imaging system and the depth of focus. We therefore invented a micro-optical iris based on an electrochromic (EC) material, which changes its absorption in response to an applied voltage. A coaxial arrangement of annular rings of the EC material is used to establish an iris aperture without the need for any moving mechanical parts. The advantages of this device arise not only from the space-saving design, with a device-layer thickness of 50 μm, but also from its low power consumption. In fact, its transmission state is stable in an open circuit, termed the memory effect. Only changes of the absorption require a voltage of up to 2 V. In contrast to mechanical iris apertures, the absorption may be controlled on an analog scale, offering the opportunity for apodization. These properties make our device an ideal candidate for battery-powered and space-saving systems. We present optical measurements concerning control of the transmitted intensity and depth of focus, and studies dealing with switching times, light scattering, and stability. While the EC polymer used in this study still has limitations concerning color and contrast, the presented device features all functions of an iris aperture. In contrast to conventional devices, it offers some special features. Owing to the variable chemistry of the EC material, its spectral response may be adjusted to certain applications, like color filtering in different spectral regimes (UV, optical range, infrared). Furthermore, all segments may be switched individually to establish functions like spatial Fourier filtering or laterally tunable intensity filters.

  1. Terahertz interferometric synthetic aperture tomography for confocal imaging systems.

    PubMed

    Heimbeck, M S; Marks, D L; Brady, D; Everitt, H O

    2012-04-15

Terahertz (THz) interferometric synthetic aperture tomography (TISAT) for confocal imaging within extended objects is demonstrated by combining attributes of synthetic aperture radar and optical coherence tomography. Algorithms recently devised for interferometric synthetic aperture microscopy are adapted to account for the diffraction- and defocusing-induced spatially varying THz beam width characteristic of narrow-depth-of-focus, high-resolution confocal imaging. A frequency-swept two-dimensional TISAT confocal imaging instrument rapidly achieves in-focus, diffraction-limited resolution over a depth 12 times larger than the instrument's depth of focus in a manner that may be easily extended to three dimensions and greater depths.

  2. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.
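For context, the FDTD method that the thin-aperture technique extends marches Maxwell's curl equations forward in time on a staggered grid. A minimal 1-D sketch in normalized units (c = 1, dx = 1); the grid size, Gaussian soft source, and Courant number are illustrative assumptions, and the Babinet-based narrow-aperture model from the report is not reproduced here.

```python
import numpy as np

nx, nt = 200, 300
ez = np.zeros(nx)          # electric field samples on integer grid points
hy = np.zeros(nx - 1)      # magnetic field samples on the half-grid
courant = 0.5              # dt/dx, below the 1-D stability limit of 1

for t in range(nt):
    hy += courant * (ez[1:] - ez[:-1])            # update H from the curl of E
    ez[1:-1] += courant * (hy[1:] - hy[:-1])      # update E from the curl of H
    ez[50] += np.exp(-((t - 30) / 10) ** 2)       # soft Gaussian source
# Endpoints ez[0] and ez[-1] stay zero, acting as perfectly conducting walls.
```

A thin aperture in a conducting screen would then be modeled by locally modifying the update equations near the slot rather than by refining the whole grid, which is the resource saving the report describes.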

  3. Dynamic metamaterial aperture for microwave imaging

    SciTech Connect

    Sleasman, Timothy; Imani, Mohammadreza F.; Gollub, Jonah N.; Smith, David R.

    2015-11-16

    We present a dynamic metamaterial aperture for use in computational imaging schemes at microwave frequencies. The aperture consists of an array of complementary, resonant metamaterial elements patterned into the upper conductor of a microstrip line. Each metamaterial element contains two diodes connected to an external control circuit such that the resonance of the metamaterial element can be damped by application of a bias voltage. Through applying different voltages to the control circuit, select subsets of the elements can be switched on to create unique radiation patterns that illuminate the scene. Spatial information of an imaging domain can thus be encoded onto this set of radiation patterns, or measurements, which can be processed to reconstruct the targets in the scene using compressive sensing algorithms. We discuss the design and operation of a metamaterial imaging system and demonstrate reconstructed images with a 10:1 compression ratio. Dynamic metamaterial apertures can potentially be of benefit in microwave or millimeter wave systems such as those used in security screening and through-wall imaging. In addition, feature-specific or adaptive imaging can be facilitated through the use of the dynamic aperture.

  4. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway, railway tunnel or storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a Discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  5. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory (VxO) effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored in the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
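The Granule idea can be illustrated with a small metadata generator: wrap one data file in a minimal SPASE-style XML description linking it to its parent resource. This is a loose, hypothetical sketch of the ADAPT concept; the element names follow the SPASE schema only approximately, and the ResourceID scheme, parent ID, and URL below are invented for illustration.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, url):
    """Emit a minimal SPASE-style Granule description for one data file."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id  # ID assigned to the file
    ET.SubElement(granule, "ParentID").text = parent_id      # high-level data product
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url                  # access URL for the file
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers and URL, not real CDAWEB/SPDF entries:
xml = make_granule(
    "spase://Example/Granule/AC/MFI/2015-01-01",
    "spase://Example/NumericalData/AC/MFI",
    "https://example.org/data/ac_mfi_20150101.cdf",
)
```

In the ADAPT workflow, a routine like this would run over the nightly CDF file lists, emitting one Granule per file and re-emitting entries whose files changed.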

  6. Aperture synthesis in space

    NASA Astrophysics Data System (ADS)

    Faucherre, Michel; Greenaway, A. H.; Merkle, F.; Noordam, J. E.; Perryman, M. A. C.

    1989-09-01

    The principles of optical aperture synthesis (OAS), which can yield images of much higher resolution than current ground observations, are essentially those of radio astronomy, and may be used in either space- or ground-based studies of the stellar envelopes around Be stars, the internal dynamics of active galaxies, etc. An account is presently given of possible OAS instrument configurations; it is shown that a large field of view can be achieved, so that the instrument may be calibrated on bright stars during the observation of faint sources. Mission concepts for a 'monostructure' OAS instrument of about 30-m size are examined.

  7. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in heavy table memory access and therefore high table power consumption. To reduce the memory access, and with it the power consumption, of current methods, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is the introduction of index search technology to reduce the memory access of table look-up and thereby its power consumption. Specifically, our scheme uses index search to reduce memory access by cutting down the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumed by table look-up. The experimental results show that our proposed index-search table look-up algorithm lowers memory access consumption by about 60% compared with the sequential-search table look-up scheme, saving considerable power for CAVLD in H.264/AVC.
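    The index-search idea can be illustrated with a toy prefix code (a hypothetical simplified table, not the actual H.264 VLCTs): instead of scanning table entries until a codeword matches, the decoder counts the leading zero run, reads the suffix, and indexes the table directly.

```python
# Hypothetical code table: (leading_zeros, suffix_bits, suffix_value) -> symbol.
# Each codeword has the form <N zeros><1><suffix>.
TABLE = {
    (0, 0, 0): 'A',   # codeword: 1
    (1, 1, 0): 'B',   # codeword: 010
    (1, 1, 1): 'C',   # codeword: 011
    (2, 1, 0): 'D',   # codeword: 0010
    (2, 1, 1): 'E',   # codeword: 0011
}
SUFFIX_BITS = {0: 0, 1: 1, 2: 1}  # suffix length as a function of the zero run

def decode(bits):
    """Decode one symbol from a list of 0/1 ints; returns (symbol, bits_used)."""
    zeros = 0
    while bits[zeros] == 0:          # count the zero run (the code prefix)
        zeros += 1
    pos = zeros + 1                  # skip the terminating '1'
    n = SUFFIX_BITS[zeros]
    suffix = 0
    for b in bits[pos:pos + n]:      # read the suffix, MSB first
        suffix = (suffix << 1) | b
    return TABLE[(zeros, n, suffix)], pos + n

sym, used = decode([0, 1, 1])        # '011' decodes to 'C' in 3 bits
```

    One dictionary probe replaces the sequential scan, which is the source of the memory-access savings the paper reports.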

  8. Differential Optical Synthetic Aperture Radar

    DOEpatents

    Stappaerts, Eddy A.

    2005-04-12

    A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.
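    The common-mode cancellation can be illustrated numerically: forming the conjugate product of the two subaperture signals removes any phase noise common to both channels, leaving only the differential phase (a simplified sketch with made-up phase histories, not the patented processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
phi1 = np.linspace(0, 4 * np.pi, N)   # phase history at subaperture 1 (hypothetical)
phi2 = phi1 + 0.05 * np.arange(N)     # subaperture 2: along-track offset adds a differential term
noise = rng.normal(0, 2.0, N)         # laser phase noise, common to both channels

s1 = np.exp(1j * (phi1 + noise))
s2 = np.exp(1j * (phi2 + noise))

# conjugate product: the common-mode noise cancels exactly,
# leaving only the differential phase used to form the image
diff = s1 * np.conj(s2)
recovered = np.unwrap(np.angle(diff))
```

    The recovered phase equals phi1 - phi2 regardless of how strong the common noise is, which is the property that relaxes the trajectory and laser-stability requirements.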

  9. Coding Strategies for X-ray Tomography

    NASA Astrophysics Data System (ADS)

    Holmgren, Andrew

    This work focuses on the construction and application of coded apertures to compressive X-ray tomography. Coded apertures can be made in a number of ways, each method having an impact on system background and signal contrast. Methods of constructing coded apertures for structuring X-ray illumination and scatter are compared and analyzed. Apertures can create structured X-ray bundles that investigate specific sets of object voxels. The tailored bundles of rays form a code (or pattern) and are later estimated through computational inversion. Structured illumination can be used to subsample object voxels and make inversion feasible for low dose computed tomography (CT) systems, or it can be used to reduce background in limited angle CT systems. On the detection side, coded apertures modulate X-ray scatter signals to determine the position and radiance of scatter points. By forming object dependent projections in measurement space, coded apertures multiplex modulated scatter signals onto a detector. The multiplexed signals can be inverted with knowledge of the code pattern and system geometry. This work shows two systems capable of determining object position and type in a 2D plane, by illuminating objects with an X-ray `fan beam,' using coded apertures and compressive measurements. Scatter tomography can help identify materials in security and medicine that may be ambiguous with transmission tomography alone.
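    The measure-then-invert step can be sketched as a linear model y = Ax with a known binary code matrix. For clarity the sketch below uses an overdetermined, noiseless case and a plain least-squares inverse rather than the compressive (subsampled) solvers discussed in the work; the dimensions and scene are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_vox = 96, 64
A = rng.integers(0, 2, (n_meas, n_vox)).astype(float)  # hypothetical binary code pattern

x = np.zeros(n_vox)
x[[5, 20, 41]] = [1.0, 2.0, 0.5]     # a sparse "scene": three scatter points
y = A @ x                             # multiplexed measurements through the coded aperture

# with the code pattern (A) known, the scene is recovered by computational inversion
xhat, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    In the compressive regime (fewer measurements than voxels), the least-squares step would be replaced by a sparsity-exploiting solver, but the role of the known code matrix is the same.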

  10. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  11. Performance Analysis of MIMO-STBC Systems with Higher Coding Rate Using Adaptive Semiblind Channel Estimation Scheme

    PubMed Central

    Kumar, Ravi

    2014-01-01

    The semiblind channel estimation method provides the best trade-off among bandwidth overhead, computational complexity, and latency. Multiple-input multiple-output (MIMO) systems yield higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space-time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind setting, which shows improvement even for highly correlated antenna arrays and is found to be very close to the case in which channel state information (CSI) is known to the receiver. PMID:24688379
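    The rate-1 Alamouti scheme is the simplest member of the STBC family analyzed here; it can be sketched as follows (noise-free, with perfect CSI at the receiver, i.e. a simplification of the paper's semiblind setting):

```python
import numpy as np

rng = np.random.default_rng(2)

def alamouti_encode(s1, s2):
    """Two symbols over two time slots on two TX antennas:
    slot 1 sends (s1, s2); slot 2 sends (-s2*, s1*)."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(r1, r2, h1, h2):
    """Linear combining with known channel gains h1, h2 (one RX antenna)."""
    g = abs(h1) ** 2 + abs(h2) ** 2
    s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
    s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
    return s1_hat, s2_hat

# two QPSK symbols through a random flat-fading channel, noise-free
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)

tx = alamouti_encode(s1, s2)
r1 = h1 * tx[0, 0] + h2 * tx[0, 1]   # reception in slot 1
r2 = h1 * tx[1, 0] + h2 * tx[1, 1]   # reception in slot 2
s1_hat, s2_hat = alamouti_decode(r1, r2, h1, h2)
```

    The combining step decouples the two symbols exactly and collects the full diversity gain |h1|^2 + |h2|^2; higher-rate STBCs trade some of this orthogonality for throughput.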

  12. Performance analysis of MIMO-STBC systems with higher coding rate using adaptive semiblind channel estimation scheme.

    PubMed

    Kumar, Ravi; Saxena, Rajiv

    2014-01-01

    The semiblind channel estimation method provides the best trade-off among bandwidth overhead, computational complexity, and latency. Multiple-input multiple-output (MIMO) systems yield higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space-time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results have been compared in the semiblind setting, which shows improvement even for highly correlated antenna arrays and is found to be very close to the case in which channel state information (CSI) is known to the receiver.

  13. Experimental demonstration of tri-aperture Differential Synthetic Aperture Ladar

    NASA Astrophysics Data System (ADS)

    Zhao, Zhilong; Huang, Jianyu; Wu, Shudong; Wang, Kunpeng; Bai, Tao; Dai, Ze; Kong, Xinyi; Wu, Jin

    2017-04-01

    A tri-aperture Differential Synthetic Aperture Ladar (DSAL) is demonstrated in the laboratory, configured with one common aperture to transmit the illuminating laser and two along-track receiving apertures to collect the back-scattered laser signal for optical heterodyne detection. The image formation theory of this tri-aperture DSAL shows that there are two possible methods to reconstruct the azimuth Phase History Data (PHD) for aperture synthesis following the standard DSAL principle, each method resulting in a different matched filter as well as a different azimuth image resolution. The experimental setup adopts a frequency-chirped laser of about 40 mW in the 1550 nm wavelength range as the illuminating source and an optical isolator, composed of a polarizing beam-splitter and a quarter wave plate, to virtually align the three apertures in the along-track direction. Various DSAL images at target distances up to 12.9 m are demonstrated using both PHD reconstruction methods.

  14. Along Track Interferometry Synthetic Aperture Radar (ATI-SAR) Techniques for Ground Moving Target Detection

    DTIC Science & Technology

    2006-01-01

    Conventional along-track interferometric synthetic aperture radar (ATI-SAR) approaches can detect... (abstract fragment; extracted references include R. Bamler and P. Hartl, "Synthetic aperture radar interferometry," Inverse Problems, vol. 14, R1-R54, 1998). Stiefvater Consultants.

  15. Holographically Correcting Synthetic Aperture Aberrations.

    DTIC Science & Technology

    1987-12-01

    ...Malacara (20:105-148). The synthetic aperture was aligned in accordance with the synthetic-aperture alignment technique of Gill (8:61-64). (Abstract fragment; extracted references include Malacara, Daniel, ed., Optical Shop Testing, New York: John Wiley & Sons, 1978.)

  16. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    NASA Astrophysics Data System (ADS)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs), and significant memory access is incurred. Heavy memory access causes high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword can be obtained without any table look-up or memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows better performance than conventional CAVLC decoding approaches, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
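    H.264's Exp-Golomb codes are the standard example of a VLC that can be decoded arithmetically with no table at all; the sketch below illustrates that table-free principle (it is not the paper's CAVLC algorithm, which handles the context-adaptive tables):

```python
def ue_decode(bits, pos=0):
    """Decode one unsigned Exp-Golomb codeword starting at bits[pos].

    Codeword = <z zeros> 1 <z info bits>; value = 2**z - 1 + info.
    Returns (value, position after the codeword) -- no table needed.
    """
    z = 0
    while bits[pos + z] == 0:      # count the leading zero run
        z += 1
    pos += z + 1                   # skip the marker '1' bit
    info = 0
    for b in bits[pos:pos + z]:    # read z info bits, MSB first
        info = (info << 1) | b
    return (1 << z) - 1 + info, pos + z
```

    For example, the bitstring 00100 decodes to the value 3. Replacing stored tables with such closed-form decoding is what eliminates the table memory traffic.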

  17. Optical aperture synthesis

    NASA Astrophysics Data System (ADS)

    van der Avoort, Casper

    2006-05-01

    Optical long baseline stellar interferometry is an observational technique in astronomy that has existed for over a century but has truly bloomed during the last decades. The undoubted value of stellar interferometry as a technique to measure stellar parameters beyond the classical resolution limit is spreading more and more to the regime of synthesis imaging. With optical aperture synthesis imaging, the measurement of parameters is extended to the reconstruction of high resolution stellar images. A number of optical telescope arrays for synthesis imaging are operational on Earth, while space-based telescope arrays are being designed. For all imaging arrays, the combination of the light collected by the telescopes in the array can be performed in a number of ways. In this thesis, methods are introduced to model these methods of beam combination and compare their effectiveness in the generation of data to be used to reconstruct the image of a stellar object. One of these methods of beam combination is to be applied in a future space telescope. The European Space Agency is developing a mission that can valuably be extended with an imaging beam combiner. This mission is labeled Darwin, as its main goal is to provide information on the origin of life. The primary objective is the detection of planets around nearby stars (called exoplanets), and more precisely Earth-like exoplanets. This detection is based on a signal, rather than an image. With an imaging mode, designed as described in this thesis, Darwin can make images of, for example, the planetary system to which the detected exoplanet belongs or, as another example, of the dust disk around a star out of which planets form. Such images will greatly contribute to the understanding of the formation of our own planetary system and of how and when life became possible on Earth. The comparison of beam combination methods for interferometric imaging occupies most of the pages of this thesis. Additional chapters will

  18. Material Measurements Using Groundplane Apertures

    NASA Technical Reports Server (NTRS)

    Komisarek, K.; Dominek, A.; Wang, N.

    1995-01-01

    A technique for material parameter determination using an aperture in a groundplane is studied. The material parameters are found by relating the measured reflected field in the aperture to a numerical model. Two apertures are studied, each of which can have a variety of different material configurations covering the aperture. The aperture cross-sections studied are rectangular and coaxial. The material configurations involved combinations of single and dual layers, with or without an exterior resistive sheet. The resistivity of the resistive sheet can be specified to simulate anything from a perfect electric conductor (PEC) backing (0 Ohms/square) to a free-space backing (infinite Ohms/square). Numerical parameter studies and measurements were performed to assess the feasibility of the technique.

  19. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed a priori. Theoretical analyses of short-exposure turbulence impacts indicate that different aperture sizes experience different degrees of turbulence impact. Smaller apertures often outperform larger-aperture systems as turbulence strength increases, which suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
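    The sub-aperture tilt sensing step reduces to estimating image shifts by cross-correlation; a minimal FFT-based sketch (random test patch and shift values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def shift_estimate(ref, img):
    """Estimate the integer (dy, dx) shift of img relative to ref by
    circular cross-correlation, computed via the FFT."""
    c = np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(ref))).real
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    ny, nx = ref.shape
    # map peak locations past the midpoint to negative shifts
    return (dy - ny if dy > ny // 2 else dy,
            dx - nx if dx > nx // 2 else dx)

ref = rng.normal(size=(32, 32))              # a training image patch
img = np.roll(ref, (3, -5), axis=(0, 1))     # the same patch seen through a tilted sub-aperture
```

    Each sub-aperture's estimated shift is proportional to the local wavefront tilt, which is what feeds the mode correction described above.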

  20. HASEonGPU-An adaptive, load-balanced MPI/GPU-code for calculating the amplified spontaneous emission in high power laser media

    NASA Astrophysics Data System (ADS)

    Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.

    2016-10-01

    We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphic processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics show perfect agreement.
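    As a toy illustration of the Monte Carlo approach (a deliberately crude caricature, not the HASEonGPU ray model): sample spontaneous-emission points over a uniformly pumped disk and average the single-pass amplification along each ray to the observation point.

```python
import numpy as np

rng = np.random.default_rng(7)

def ase_estimate(gain_coeff, n_rays=50_000, radius=1.0):
    """Toy Monte Carlo ASE estimate.

    Emission points are sampled uniformly over the disk area (hence the
    sqrt in the radial sampling); each ray is amplified as exp(g * r)
    over its path length r to the disk center. Returns the mean
    amplification, a stand-in for the ASE flux at that point.
    """
    r = radius * np.sqrt(rng.uniform(size=n_rays))   # area-uniform radial sampling
    return np.mean(np.exp(gain_coeff * r))
```

    The real code traces rays in 3-D through the pumped gain distribution and adaptively refines the sampling, but the estimator structure (average of per-ray amplifications) is the same.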

  1. 47 CFR 25.134 - Licensing provisions of Very Small Aperture Terminal (VSAT) and C-band Small Aperture Terminal...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... a VSAT network using frequency division multiple access (FDMA) or time division multiple access (TDMA) technique, N is equal to one. For a VSAT network using code division multiple access (CDMA... Terminal (VSAT) and C-band Small Aperture Terminal (CSAT) networks. 25.134 Section 25.134...

  2. 47 CFR 25.134 - Licensing provisions of Very Small Aperture Terminal (VSAT) and C-band Small Aperture Terminal...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... a VSAT network using frequency division multiple access (FDMA) or time division multiple access (TDMA) technique, N is equal to one. For a VSAT network using code division multiple access (CDMA... Terminal (VSAT) and C-band Small Aperture Terminal (CSAT) networks. 25.134 Section 25.134...

  3. 47 CFR 25.134 - Licensing provisions of Very Small Aperture Terminal (VSAT) and C-band Small Aperture Terminal...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... a VSAT network using frequency division multiple access (FDMA) or time division multiple access (TDMA) technique, N is equal to one. For a VSAT network using code division multiple access (CDMA... Terminal (VSAT) and C-band Small Aperture Terminal (CSAT) networks. 25.134 Section 25.134...

  4. 47 CFR 25.134 - Licensing provisions for Very Small Aperture Terminal (VSAT) and C-band Small Aperture Terminal...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... a VSAT network using frequency division multiple access (FDMA) or time division multiple access (TDMA) technique, N is equal to one. For a VSAT network using code division multiple access (CDMA... Terminal (VSAT) and C-band Small Aperture Terminal (CSAT) networks. 25.134 Section 25.134...

  5. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.
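    An implicit particle trajectory integration scheme of the kind mentioned can be illustrated with Stokes drag: the implicit Euler update has a closed-form solution and is unconditionally stable (a sketch of the general idea, not the FDNS-3DEL implementation):

```python
def step_particle(v, u, dt, tau):
    """One implicit-Euler step of the drag law dv/dt = (u - v) / tau.

    Solving v_new = v + dt * (u - v_new) / tau in closed form gives an
    unconditionally stable update: the particle velocity relaxes toward
    the local gas velocity u without overshoot, even for dt >> tau.
    """
    return (v + dt * u / tau) / (1.0 + dt / tau)
```

    An explicit step would require dt on the order of the particle response time tau; the implicit form lets the flow solver's time step be used directly, and the per-particle arithmetic is a single fused expression that vectorizes over long particle arrays.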

  6. Synthetic aperture hitchhiker imaging.

    PubMed

    Yarman, Can Evren; Yazici, Birsen

    2008-11-01

    We introduce a novel synthetic-aperture imaging method for radar systems that rely on sources of opportunity. We consider receivers that fly along arbitrary, but known, flight trajectories and develop a spatio-temporal correlation-based filtered-backprojection-type image reconstruction method. The method involves first correlating the measurements from two different receiver locations. This leads to a forward model where the radiance of the target scene is projected onto the intersection of certain hyperboloids with the surface topography. We next use microlocal techniques to develop a filtered-backprojection-type inversion method to recover the scene radiance. The method is applicable to both stationary and mobile, and cooperative and noncooperative sources of opportunity. Additionally, it is applicable to nonideal imaging scenarios such as those involving arbitrary flight trajectories, and has the desirable property of preserving the visible edges of the scene radiance. We present an analysis of the computational complexity of the image reconstruction method and demonstrate its performance in numerical simulations for single and multiple transmitters of opportunity.
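    The first step, correlating receptions from two receiver locations, can be sketched in one dimension: the correlation peak gives the delay difference between the two paths, which is the quantity that confines a scene contribution to a hyperboloid (the waveform and delays below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1024
s = rng.normal(size=n)        # unknown waveform from a source of opportunity
d1, d2 = 40, 25               # propagation delays (in samples) to the two receivers

r1 = np.roll(s, d1)
r2 = np.roll(s, d2)

# correlate the two receptions: the peak lag is the delay *difference* d1 - d2,
# obtained without knowing the transmitted waveform or its timing
corr = np.fft.ifft(np.fft.fft(r1) * np.conj(np.fft.fft(r2))).real
lag = int(np.argmax(corr))
if lag > n // 2:              # map wrapped lags to negative values
    lag -= n
```

    Because only the difference of delays is observable, the method needs no cooperation from the transmitter; the backprojection step then spreads each correlation sample over the corresponding hyperboloid-topography intersection.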

  7. Sparse aperture endoscope

    DOEpatents

    Fitch, Joseph P.

    1999-07-06

    An endoscope which reduces the volume needed by the imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools; this permits smaller incisions in surgical and diagnostic medical applications, producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing.

  8. Sparse aperture endoscope

    DOEpatents

    Fitch, J.P.

    1999-07-06

    An endoscope is disclosed which reduces the volume needed by the imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools; this permits smaller incisions in surgical and diagnostic medical applications, producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing. 7 figs.

  9. Aperture masking interferometry research simulation

    NASA Astrophysics Data System (ADS)

    Wang, Haitao; Luo, Qiufeng; Fan, Weijun; Zhang, Xian Ling; Tao, Chunkan; Zhu, Yongtian; Zhou, Bifang; Chen, Hanliang

    2004-10-01

    Aperture Masking Interferometry (AMI) is one of the technologies for high-resolution astronomical imaging, and it is also an important research path toward Optical Aperture Synthesis (OAS). The theory of OAS is briefly introduced and an AMI simulation method is proposed. A mathematical model is built and interferogram fringes are obtained. The u-v coverage of the aperture mask is discussed and an image reconstruction is performed; the reconstructed image is obtained with the CLEAN method. Shortcomings of this work are also noted, and future research directions are mentioned at the end.

  10. FESDIF -- Finite Element Scalar Diffraction theory code

    SciTech Connect

    Kraus, H.G.

    1992-09-01

    This document describes the theory and use of a powerful scalar diffraction theory based computer code for calculation of intensity fields due to diffraction of optical waves by two-dimensional planar apertures and lenses. This code is called FESDIF (Finite Element Scalar Diffraction). It is based upon both Fraunhofer and Kirchhoff scalar diffraction theories. Simplified routines for circular apertures are included. However, the real power of the code comes from its basis in finite element methods. These methods allow the diffracting aperture to be virtually any geometric shape, including the various secondary aperture obstructions present in telescope systems. Aperture functions, with virtually any phase and amplitude variations, are allowed in the aperture openings. Step change aperture functions are accommodated. The incident waves are considered to be monochromatic. Plane waves, spherical waves, or Gaussian laser beams may be incident upon the apertures. Both area and line integral transformations were developed for the finite element based diffraction transformations. There is some loss of aperture function generality in the line integral transformations which are typically many times more computationally efficient than the area integral transformations when applicable to a particular problem.
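    For the simplest case such a code handles, Fraunhofer diffraction of a plane wave by a circular aperture, the far-field amplitude is the Fourier transform of the aperture function; a minimal FFT sketch (this is the textbook discrete shortcut, not FESDIF's finite element method):

```python
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
aperture = ((X**2 + Y**2) <= 0.2**2).astype(float)   # circular aperture, radius 0.2

# Fraunhofer regime: far-field amplitude is the 2-D Fourier transform of the
# aperture function; the intensity is its squared magnitude (the Airy pattern)
field = np.fft.fftshift(np.fft.fft2(aperture))
intensity = np.abs(field) ** 2
intensity /= intensity.max()
```

    The finite element transformations in the code generalize this to arbitrary aperture shapes, obstructions, and non-uniform aperture functions, where a uniform FFT grid is no longer adequate.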

  11. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    SciTech Connect

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  12. P1 adaptation of TRIPOLI-4® code for the use of 3D realistic core multigroup cross section generation

    NASA Astrophysics Data System (ADS)

    Cai, Li; Pénéliau, Yannick; Diop, Cheikh M.; Malvagi, Fausto

    2014-06-01

    In this paper, we discuss some improvements we recently implemented in the Monte-Carlo code TRIPOLI-4® associated with the homogenization and collapsing of subassembly cross sections. The improvement offers another approach to obtaining critical multigroup cross sections with the Monte-Carlo method. The new calculation method in TRIPOLI-4® aims to preserve the neutronic balances, the multiplication factors, and the critical flux spectra for some realistic geometries. We do this by first improving the treatment of the energy transfer probability, the neutron excess weight, and the neutron fission spectrum; this step is necessary for infinite geometries. The second step, which is elaborated in this paper, is aimed at better handling of the multigroup anisotropy distribution law for finite geometries. Usually, Monte-Carlo homogenized multigroup cross sections are validated within a core calculation by a deterministic code. Here, the validation of multigroup constants is also carried out by a Monte-Carlo core calculation code. Different subassemblies are tested with the new collapsing method, especially subassemblies of fast neutron reactors.
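    The collapsing step rests on flux weighting, chosen so that broad-group reaction rates equal the sum of the fine-group rates; a minimal sketch with invented group values:

```python
import numpy as np

def collapse(sigma_fine, flux_fine, groups):
    """Flux-weighted collapse of fine-group cross sections to broad groups.

    sigma_G = sum_g(sigma_g * phi_g) / sum_g(phi_g) over each broad group,
    which preserves the reaction rate sigma_G * phi_G within each group.
    """
    return np.array([
        np.sum(sigma_fine[g] * flux_fine[g]) / np.sum(flux_fine[g]) for g in groups
    ])

sigma = np.array([10.0, 8.0, 2.0, 1.0])   # hypothetical fine-group cross sections (barns)
phi = np.array([1.0, 3.0, 2.0, 2.0])      # hypothetical fine-group flux spectrum
broad = collapse(sigma, phi, [slice(0, 2), slice(2, 4)])
```

    The paper's contribution concerns how the weighting flux, transfer probabilities, and anisotropy laws are tallied within the Monte Carlo run itself; the preservation principle above is what those tallies must satisfy.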

  13. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells.

    PubMed

    Filomatori, Claudia V; Carballeda, Juan M; Villordo, Sergio M; Aguirre, Sebastian; Pallarés, Horacio M; Maestre, Ana M; Sánchez-Vargas, Irma; Blair, Carol D; Fabri, Cintia; Morales, Maria A; Fernandez-Sesma, Ana; Gamarnik, Andrea V

    2017-03-01

    The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitos and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3'UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery, but rather on the selection of specific mutations in the viral 3'UTR. By dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3'UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. 
This work provides new ideas about the impact of host adaptation on the variability and evolution of flavivirus 3'UTRs with

  14. Microelectrofluidic iris for variable aperture

    NASA Astrophysics Data System (ADS)

    Chang, Jong-hyeon; Jung, Kyu-Dong; Lee, Eunsung; Choi, Minseog; Lee, Seungwan

    2012-03-01

    This paper presents a variable aperture design based on microelectrofluidic technology, which integrates electrowetting and microfluidics. The proposed microelectrofluidic iris (MEFI) consists of two immiscible fluids and two connected surface channels formed by three transparent plates and two spacers between them. In the initial state, the confined aqueous ring makes two fluidic interfaces, on which the Laplace pressure is equal, in the hydrophobic surface channels. When a certain voltage is applied between the dielectric-coated control electrode beneath the three-phase contact line (TCL) and the reference electrode grounding the aqueous phase, the contact angle changes on the activated control electrode. At high voltage over the threshold, the induced positive pressure difference makes the TCLs on the 1st channel advance to the center and the aperture narrow. If there is no potential difference between the control and reference electrodes, the pressure difference becomes negative; it makes the TCLs on the 1st channel recede and the aperture widen back to the initial state. The proposed MEFI is expected to find wide use because of its fast response, circular aperture, digital operation, high aperture ratio, and potential for miniaturization as a variable aperture.
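    The voltage-dependent contact-angle change that drives the iris follows the Young-Lippmann relation; a sketch with illustrative parameter values (not measured MEFI values):

```python
import numpy as np

def contact_angle_deg(theta0_deg, volts, cap_per_area, gamma):
    """Young-Lippmann relation: cos(theta) = cos(theta0) + C * V**2 / (2 * gamma).

    theta0_deg   -- contact angle at zero voltage (degrees)
    cap_per_area -- capacitance per unit area of the dielectric coating (F/m^2)
    gamma        -- interfacial tension of the two fluids (N/m)
    All numeric values used below are illustrative only.
    """
    c = np.cos(np.radians(theta0_deg)) + cap_per_area * volts**2 / (2.0 * gamma)
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))
```

    Lowering the contact angle on the activated electrode is what shifts the Laplace pressure balance between the two interfaces and moves the contact line, i.e. opens or closes the aperture.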

  15. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete-actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now developing a numerically efficient object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing computer language.
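The correction step such a deformable mirror and Hartmann sensor system performs can be sketched as a least-squares fit of actuator commands to measured slopes through an influence (poke) matrix. The matrix below is random, standing in for a measured response; nothing here comes from the actual Beamlet code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subaps, n_actuators = 32, 8
A = rng.standard_normal((n_subaps, n_actuators))         # influence (poke) matrix
true_cmd = rng.standard_normal(n_actuators)              # "unknown" mirror figure
s = A @ true_cmd + 0.01 * rng.standard_normal(n_subaps)  # noisy Hartmann slopes

# Least-squares actuator commands and the remaining slope residual
cmd, *_ = np.linalg.lstsq(A, s, rcond=None)
residual = np.linalg.norm(A @ cmd - s)
```

Because the system is overdetermined (32 slope measurements, 8 actuators), the fit averages down the measurement noise.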

  16. Fabrication of the pinhole aperture for AdaptiSPECT

    PubMed Central

    Kovalsky, Stephen; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical pinhole SPECT imaging system under final construction at the Center for Gamma-Ray Imaging. The system is designed to autonomously change its imaging configuration. It comprises 16 detectors mounted on translational stages that move radially away from and towards the center of the field of view. The system also possesses an adaptive pinhole aperture with multiple collimator diameters and pinhole sizes, as well as the ability to switch between multiplexed and non-multiplexed imaging configurations. In this paper, we describe the fabrication of the AdaptiSPECT pinhole aperture and its controllers. PMID:26146443

  17. Computational study of ion beam extraction phenomena through multiple apertures

    SciTech Connect

    Hu, Wanpeng; Sang, Chaofeng; Tang, Tengfei; Wang, Dezhen; Li, Ming; Jin, Dazhi; Tan, Xiaohua

    2014-03-15

    The process of ion extraction through multiple apertures is investigated using a two-dimensional particle-in-cell code. We consider apertures with a fixed diameter in a hydrogen plasma background, and trace the trajectories of electrons, H{sup +} and H{sub 2}{sup +} ions in the self-consistently calculated electric field. The focus of this work is the fundamental physics of ion extraction rather than a specific device. The computed convergence and divergence of the extracted ion beam are analyzed. We find that the extracted ion flux reaching the extraction electrode is non-uniform and that the peak flux positions change with operational parameters, not necessarily matching the positions of the apertures in the y-direction. The profile of the ion flux reaching the electrode is mainly affected by the bias voltage and the distance between grid wall and extraction electrode.
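The particle-tracing ingredient of such a PIC code can be illustrated with a leapfrog push of a single H+ ion through a uniform extraction field. This 1-D toy with assumed values is not the paper's self-consistent 2-D model.

```python
# Leapfrog push of one H+ ion across an extraction gap (all values assumed)
q, m = 1.602e-19, 1.673e-27     # H+ charge (C) and mass (kg)
E, L = 1.0e4, 0.01              # uniform field (V/m) and gap length (m)
dt = 1e-9                       # time step (s)

x, v = 0.0, 1.0e3               # start at the plasma edge with a small drift
v += 0.5 * dt * q * E / m       # stagger velocity half a step (leapfrog)
while x < L:
    x += dt * v
    v += dt * q * E / m

# the final speed should match energy conservation: v^2 = v0^2 + 2 q E L / m
```

A real PIC code would deposit charge on a grid and recompute E from Poisson's equation each step; here the field is frozen to keep the pusher itself visible.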

  18. Antenna aperture and imaging resolution of synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Liu, Liren

    2009-08-01

    In this paper, the azimuth imaging resolutions of synthetic aperture imaging ladar (SAIL) are investigated for antenna telescopes with a circular aperture for reception and a circular plane or a Gaussian beam for transmission, and with a rectangular aperture for reception and a rectangular plane or an elliptic Gaussian beam for transmission. The analytic expressions of the impulse response for imaging are derived. The ideal azimuth resolution spot and its degradation due to target deviation from the footprint center, mismatch from the quadratic-phase matched filtering, and the finite sampling rate and width are discussed. The range resolution is also studied. Mathematical criteria are given throughout. In conclusion, a telescope with a rectangular aperture provides a rectangular footprint more suitable for the SAIL scanning format, so an optimal aperture design is possible for both high resolution and a wide scan strip. Moreover, an explanation of the azimuth resolution obtained with our laboratory-scaled SAIL is given to verify the developed theory.
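As a baseline for the resolutions discussed, the real-aperture diffraction limit of a circular receiving telescope can be computed directly. Wavelength, aperture diameter and range below are illustrative assumptions; a SAIL's synthetic-aperture azimuth resolution is finer than this real-aperture figure.

```python
# Rayleigh diffraction limit of a circular aperture (illustrative values)
wavelength = 1.55e-6   # m, a common ladar wavelength (assumed)
D = 0.10               # circular receiving aperture diameter, m (assumed)
R = 1.0e4              # target range, m (assumed)

theta = 1.22 * wavelength / D   # angular resolution, rad
spot = theta * R                # azimuth spot size at range R, m
```

With these numbers the real-aperture spot is about 19 cm at 10 km, which is the scale the synthetic-aperture processing improves upon.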

  19. Interferometric aperture synthesis for next generation passive millimetre wave imagers

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.; Wilkinson, Peter; Taylor, Chris

    2012-10-01

    This paper discusses the phase effects in the near-field associated with aperture synthesis imaging. The results explain why in some regions of the near-field it is possible to use Fourier transform techniques on a visibility function to create images. However, to generate images deep inside the near-field, alternative processing techniques such as the G-matrix method are required. Algorithms based on this technique are used to process imagery from a proof-of-concept 22 GHz aperture synthesis imager [1]. Techniques for generating synthetic cross-correlations for the aperture synthesis technique are introduced and then validated using the image creation algorithms and real data from the proof-of-concept imager. Using these data, the phenomenon of aliasing is explored. The simulation code is then used to illustrate how the effects of aliasing may be minimised by randomising the locations of the antennas over the aperture. The simulation tool is used to show how, in the near field, the technique can provide a range resolution in 3D imaging of a couple of millimetres when operating at a wavelength of 13 mm. To illustrate the quality of images generated by next-generation aperture synthesis imagers, the software is extended to systems with hundreds of receiver channels.
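The far-field Fourier relationship mentioned above can be sketched in a toy form: when the visibility function is fully sampled on a regular (u, v) grid, it is the 2-D Fourier transform of the sky brightness, so an inverse FFT recovers the image. The near-field G-matrix processing the paper actually uses is not shown; the grid and source position are arbitrary.

```python
import numpy as np

n = 64
sky = np.zeros((n, n))
sky[20, 30] = 1.0              # single point source at an arbitrary position

vis = np.fft.fft2(sky)         # ideal, fully sampled visibility function
dirty = np.fft.ifft2(vis).real # Fourier-transform image reconstruction
```

With full sampling the reconstruction is exact; sparse antenna layouts sample only some (u, v) points, which is what introduces the aliasing artifacts the paper explores.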

  20. Design of NRAs having higher aperture opening ratio and autocorrelation compression ratio by means of a global optimization method

    NASA Astrophysics Data System (ADS)

    Lang, Haitao; Liu, Liren; Yang, Qingguo

    2007-10-01

    When noise considerations are made, nonredundant arrays (NRAs) have many advantages in coded aperture imaging that other arrays, e.g., uniformly redundant arrays (URAs), do not possess. However, a low aperture opening ratio limits the application of NRAs in practice. In this paper, we present a computer search method for designing NRAs based on a global optimization algorithm named DIRECT. Compared with existing NRAs, e.g., Golay's NRAs, which are well known and widely used in various applications, the NRAs found by our method have higher aperture opening ratios and autocorrelation compression ratios. These advantages make our aperture arrays very useful for practical applications, especially those in which aperture size is limited. We also present some of the aperture arrays we found. These arrays have the interesting property of belonging to both the NRA and URA classes.
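The defining property being optimized can be checked directly: a 1-D mask is nonredundant when every pairwise separation between open elements occurs at most once, i.e. all autocorrelation sidelobes are 0 or 1. The sketch below tests that property and computes the aperture opening ratio; the DIRECT search itself is not shown, and the example mask is a standard perfect ruler, not one of the paper's arrays.

```python
import numpy as np

def is_nonredundant(mask):
    """True if every separation between open elements occurs at most once."""
    mask = np.asarray(mask)
    ac = np.correlate(mask, mask, mode="full")
    sidelobes = np.delete(ac, len(ac) // 2)   # drop the central (zero-lag) peak
    return bool(np.all(sidelobes <= 1))

perfect_ruler = [1, 1, 0, 1, 0, 0, 0]   # marks {0, 1, 3}: separations 1, 2, 3 each once
opening_ratio = sum(perfect_ruler) / len(perfect_ruler)   # fraction of open elements
```

A search procedure like the paper's scores candidate masks with exactly this kind of autocorrelation test while trying to push the opening ratio as high as possible.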

  1. Fast coeff_token decoding method and new memory architecture design for an efficient H.264/AVC context-based adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Moon, Yong Ho; Yoon, Kun Su; Ha, Seok Wun

    2009-12-01

    A fast coeff_token decoding method based on a new memory architecture is proposed to implement an efficient context-based adaptive variable-length coding (CAVLC) decoder. The heavy memory access needed in CAVLC decoding is a significant issue in the design of real systems, such as digital multimedia broadcasting players, portable media players, and mobile phones with video, because it results in high power consumption and operational delay. Recently, a new coeff_token variable-length decoding method was suggested to reduce memory accesses. However, it still requires a large portion of the total memory accesses in CAVLC decoding. In this work, an effective memory architecture is designed through careful examination of the codewords in the variable-length code tables. In addition, a novel fast decoding method is proposed to further reduce the memory accesses required for reconstructing the coeff_token element. Only one memory access is used to reconstruct each coeff_token element in the proposed method.
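Table-driven prefix-code decoding, the operation whose memory traffic the paper reduces, can be sketched minimally: accumulate bits until the buffer matches a codeword, then emit the symbol. The codewords below are illustrative, not the actual H.264 coeff_token tables.

```python
# Toy prefix-free VLC table (illustrative, not the H.264 coeff_token table)
TABLE = {"1": "A", "01": "B", "001": "C", "000": "D"}

def decode(bits):
    """Greedy table-lookup decoding of a prefix-free variable-length code."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in TABLE:          # one table lookup per completed codeword
            out.append(TABLE[buf])
            buf = ""
    return out
```

In hardware, each lookup in a structure like TABLE is a memory access; the paper's contribution is arranging the tables so that one access suffices per coeff_token element.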

  2. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign.

  3. Automation and adaptation: Nurses’ problem-solving behavior following the implementation of bar coded medication administration technology

    PubMed Central

    Holden, Richard J.; Rivera-Rodriguez, A. Joy; Faye, Héléne; Scanlon, Matthew C.; Karsh, Ben-Tzion

    2012-01-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses’ operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA’s impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians’ work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  4. Code Development of Three-Dimensional General Relativistic Hydrodynamics with AMR (Adaptive-Mesh Refinement) and Results from Special and General Relativistic Hydrodynamics

    NASA Astrophysics Data System (ADS)

    Dönmez, Orhan

    2004-09-01

    In this paper, a general procedure to solve the general relativistic hydrodynamics (GRH) equations with adaptive-mesh refinement (AMR) is presented. To this end, the GRH equations are written in conservation form to exploit their hyperbolic character. Numerical solutions of the GRH equations are obtained with high-resolution shock-capturing (HRSC) schemes, specifically designed to solve nonlinear hyperbolic systems of conservation laws. These schemes depend on the characteristic information of the system. Marquina fluxes with MUSCL left and right states are used to solve the GRH equations. First, different test problems with uniform and AMR grids on the special relativistic hydrodynamics equations are carried out to verify the second-order convergence of the code in one, two and three dimensions. Results from uniform and AMR grids are compared. It is found that the adaptive grid does a better job as the resolution is increased. Second, the GRH equations are tested using two different test problems: geodesic flow and circular motion of a particle. To do this, the flux part of the GRH equations is coupled with the source part using Strang splitting. This coupling is carried out in a treatment that gives second-order accurate solutions in space and time.
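The Strang splitting used to couple flux and source terms can be shown on a scalar toy problem du/dt = a*u + b*u: advance one operator a half step, the other a full step, then the first a half step again. For this linear commuting pair the composition is exact up to rounding; in the GRH setting it is second-order accurate. The coefficients are arbitrary.

```python
import math

def strang_step(u, a, b, dt):
    """One Strang-split step for du/dt = a*u + b*u (toy flux/source pair)."""
    u *= math.exp(a * dt / 2)   # half step of operator A ("flux")
    u *= math.exp(b * dt)       # full step of operator B ("source")
    u *= math.exp(a * dt / 2)   # half step of operator A
    return u

u, a, b, dt = 1.0, 0.3, -1.1, 0.01
for _ in range(100):            # integrate to t = 1
    u = strang_step(u, a, b, dt)
# exact answer at t = 1 is exp(a + b) = exp(-0.8)
```

The same half/full/half pattern applies when the substeps are a hyperbolic flux update and a stiff source update instead of scalar exponentials.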

  5. Dengue virus genomic variation associated with mosquito adaptation defines the pattern of viral non-coding RNAs and fitness in human cells

    PubMed Central

    Aguirre, Sebastian; Pallarés, Horacio M.; Blair, Carol D.; Fabri, Cintia; Morales, Maria A.; Fernandez-Sesma, Ana; Gamarnik, Andrea V.

    2017-01-01

    The Flavivirus genus includes a large number of medically relevant pathogens that cycle between humans and arthropods. This host alternation imposes a selective pressure on the viral population. Here, we found that dengue virus, the most important viral human pathogen transmitted by insects, evolved a mechanism to differentially regulate the production of viral non-coding RNAs in mosquitos and humans, with a significant impact on viral fitness in each host. Flavivirus infections accumulate non-coding RNAs derived from the viral 3’UTRs (known as sfRNAs), relevant in viral pathogenesis and immune evasion. We found that dengue virus host adaptation leads to the accumulation of different species of sfRNAs in vertebrate and invertebrate cells. This process does not depend on differences in the host machinery; but it was found to be dependent on the selection of specific mutations in the viral 3’UTR. Dissecting the viral population and studying phenotypes of cloned variants, the molecular determinants for the switch in the sfRNA pattern during host change were mapped to a single RNA structure. Point mutations selected in mosquito cells were sufficient to change the pattern of sfRNAs, induce higher type I interferon responses and reduce viral fitness in human cells, explaining the rapid clearance of certain viral variants after host change. In addition, using epidemic and pre-epidemic Zika viruses, similar patterns of sfRNAs were observed in mosquito and human infected cells, but they were different from those observed during dengue virus infections, indicating that distinct selective pressures act on the 3’UTR of these closely related viruses. In summary, we present a novel mechanism by which dengue virus evolved an RNA structure that is under strong selective pressure in the two hosts, as a regulator of non-coding RNA accumulation and viral fitness. This work provides new ideas about the impact of host adaptation on the variability and evolution of flavivirus 3’UTRs.

  6. Apodized apertures for solar coronagraphy

    NASA Astrophysics Data System (ADS)

    Aime, C.

    2007-05-01

    Aims: We propose the principle of a new solar telescope that makes it possible to observe the solar corona very close to the solar limb, without the help of a Lyot coronagraph. The result is obtained using a strongly apodized aperture. Methods: We obtain the theoretical form of the diffraction halo produced by the solar disk at the level of the corona for a perfect diffraction-limited telescope, for raw and apodized apertures. The problem is first solved in one dimension, for which a complete set of analytical expressions can be derived, including the effect of the center-to-limb solar variation. Formal equations are written for the two-dimensional case, and it is shown that the expression may take the form of a 1D integral. Nevertheless, the problem is difficult to solve. An analytic expression can be worked out using the line spread function, which is shown to give a valid approximation of the problem, in excellent agreement with a numerical computation that uses the exact integral. Results: We show for the raw aperture that the diffraction halo is very strong and decreases slowly, as ρ^-1. As a solution to this problem we propose an apodized aperture based on the generalized prolate spheroidal functions (GPSF). Such an apodized aperture may reduce the diffraction halo enough to permit direct observation of the solar corona very close to the solar limb. A signal-to-noise ratio analysis is given. Conclusions: Different strengths of apodization may be used, but very strong apodization is indeed mandatory. A good choice seems to be a GPSF aperture with the prolate coefficient c on the order of 10. It could reduce the diffraction halo by a factor of 10^5 (at the cost of an intensity throughput of 10% and a reduction in the classical resolution by a factor of about 1.6) and permit observation of the corona very close to the solar limb.
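The effect of apodization on the diffraction halo can be sketched numerically in one dimension: compare the normalized point-spread function of a raw slit aperture with that of a smoothly tapered one. A Gaussian taper stands in here for the prolate (GPSF) apodizer of the paper, and all sizes are arbitrary grid units.

```python
import numpy as np

n, width = 4096, 256
pupil = np.zeros(n)
pupil[n//2 - width//2 : n//2 + width//2] = 1.0            # raw slit aperture

xs = np.arange(-width//2, width//2)
apod = np.zeros(n)
apod[n//2 - width//2 : n//2 + width//2] = np.exp(-(xs / (width / 6.0))**2)

def normalized_psf(p):
    a = np.abs(np.fft.fftshift(np.fft.fft(p)))**2
    return a / a.max()

far = slice(n//2 + 200, n//2 + 400)        # halo region well outside the core
halo_raw = normalized_psf(pupil)[far].max()
halo_apod = normalized_psf(apod)[far].max()
```

The raw aperture's halo in this region sits at the slowly decaying sinc-squared sidelobe level, while the tapered aperture's halo falls many orders of magnitude lower, at the cost of throughput, just as the abstract describes for the GPSF design.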

  7. Adaptive coding of the value of social cues with oxytocin, an fMRI study in autism spectrum disorder.

    PubMed

    Andari, Elissar; Richard, Nathalie; Leboyer, Marion; Sirigu, Angela

    2016-03-01

    The neuropeptide oxytocin (OT) is one of the major targets of research in neuroscience, with respect to social functioning. Oxytocin promotes social skills and improves the quality of face processing in individuals with social dysfunctions such as autism spectrum disorder (ASD). Although one of OT's key functions is to promote social behavior during dynamic social interactions, the neural correlates of this function remain unknown. Here, we combined acute intranasal OT (IN-OT) administration (24 IU) and fMRI with an interactive ball game and a face-matching task in individuals with ASD (N = 20). We found that IN-OT selectively enhanced the brain activity of early visual areas in response to faces as compared to non-social stimuli. OT inhalation modulated the BOLD activity of amygdala and hippocampus in a context-dependent manner. Interestingly, IN-OT intake enhanced the activity of mid-orbitofrontal cortex in response to a fair partner, and insula region in response to an unfair partner. These OT-induced neural responses were accompanied by behavioral improvements in terms of allocating appropriate feelings of trust toward different partners' profiles. Our findings suggest that OT impacts the brain activity of key areas implicated in attention and emotion regulation in an adaptive manner, based on the value of social cues.

  8. Synthetic Aperture Radar Oceanographic Investigations.

    DTIC Science & Technology

    1987-03-01

    Shuchman, P.G. Teleki, S.V. Hsiao, O.H. Shemdin, and W.E. Brown, Synthetic Aperture Radar Imaging of Ocean Waves: Comparison with Wave Measurements, J... Shemdin, Synthetic Aperture Radar Imaging of Ocean Waves during the Marineland Experiment, IEEE J. Oceanic Eng., OE-8, pp. 83-90, 1983. 12. R.A... If the surface reflectivity is assumed to be spatially uncorrelated, i.e., ... are computed from the wave height spectrum as follows: (x, y, t

  9. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

    A large (tens of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target, with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets, which may be either earthbound or celestial.
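The diffractive primary described above is a Fresnel-type lens; for an ideal Fresnel zone plate the zone-boundary radii follow r_n = sqrt(n f λ). The wavelength and focal length below are illustrative assumptions, not values from the patent.

```python
import math

lam = 5.5e-7    # wavelength, m (visible light, assumed)
f = 1000.0      # focal length, m (illustrative; the two craft fly km apart)

# radii of the first five zone boundaries, r_n = sqrt(n * f * lam)
radii = [math.sqrt(n * f * lam) for n in range(1, 6)]
```

The zones crowd together toward the rim (r_4 is only twice r_1), which is why the outer features of such a membrane lens must be fabricated much more finely than the inner ones.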

  10. SEASAT Synthetic Aperture Radar Data

    NASA Technical Reports Server (NTRS)

    Henderson, F. M.

    1981-01-01

    The potential of radar imagery from space altitudes is discussed and the advantages of radar over passive sensor systems are outlined. Specific reference is made to the SEASAT synthetic aperture radar. Possible applications include oil spill monitoring, snow and ice reconnaissance, mineral exploration, and monitoring phenomena in the urban environment.

  11. Future of synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Barath, F. T.

    1978-01-01

    The present status of the applications of Synthetic Aperture Radars (SARs) is reviewed, and the technology state-of-the art as represented by the Seasat-A and SIR-A SARs examined. The potential of SAR applications, and the near- and longer-term technology trends are assessed.

  12. Regulatory versus coding signatures of natural selection in a candidate gene involved in the adaptive divergence of whitefish species pairs (Coregonus spp.).

    PubMed

    Jeukens, Julie; Bernatchez, Louis

    2012-01-01

    While gene expression divergence is known to be involved in adaptive phenotypic divergence and speciation, the relative importance of regulatory and structural evolution of genes is poorly understood. A recent next-generation sequencing experiment allowed the identification of candidate genes potentially involved in the ongoing speciation of sympatric dwarf and normal lake whitefish (Coregonus clupeaformis), such as cytosolic malate dehydrogenase (MDH1), which showed both significant expression and sequence divergence. The main goal of this study was to investigate in more detail the signatures of natural selection in the regulatory and coding sequences of MDH1 in lake whitefish and test for parallelism of these signatures with other coregonine species. Sequencing of the two regions in 118 fish from four sympatric pairs of whitefish and two cisco species revealed a total of 35 single nucleotide polymorphisms (SNPs), with more genetic diversity in European compared to North American coregonine species. While the coding region was found to be under purifying selection, an SNP in the proximal promoter exhibited significant allele frequency divergence in a parallel manner among independent sympatric pairs of North American lake whitefish and European whitefish (C. lavaretus). According to transcription factor binding simulation for 22 regulatory haplotypes of MDH1, putative binding profiles were fairly conserved among species, except for the region around this SNP. Moreover, we found evidence for the role of this SNP in the regulation of MDH1 expression level. Overall, these results provide further evidence for the role of natural selection in gene regulation evolution among whitefish species pairs and suggest its possible link with patterns of phenotypic diversity observed in coregonine species.

  13. Synthetic Aperture Sonar Low Frequency vs. High Frequency Automatic Contact Generation

    DTIC Science & Technology

    2010-06-01

    J. R. Dubberley and M. L. Gendron, Naval Research Laboratory, Code 7440.1, Building 1005, Stennis Space Center, MS 39529 USA. Abstract: Synthetic Aperture Sonar (SAS) bottom mapping sensors are on the ... resurveyed the harbor with both sidescan sonar (on REMUS) and SAS (on the SSAM AUV) provided by NAVSEA Coastal Systems Command. NOMWC, NAVOCEANO and...

  14. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  15. Visual Adaptation

    PubMed Central

    Webster, Michael A.

    2015-01-01

    Sensory systems continuously mold themselves to the widely varying contexts in which they must operate. Studies of these adaptations have played a long and central role in vision science. In part this is because the specific adaptations remain a powerful tool for dissecting vision, by exposing the mechanisms that are adapting. That is, “if it adapts, it's there.” Many insights about vision have come from using adaptation in this way, as a method. A second important trend has been the realization that the processes of adaptation are themselves essential to how vision works, and thus are likely to operate at all levels. That is, “if it's there, it adapts.” This has focused interest on the mechanisms of adaptation as the target rather than the probe. Together both approaches have led to an emerging insight of adaptation as a fundamental and ubiquitous coding strategy impacting all aspects of how we see. PMID:26858985

  16. Locomotor behaviour of children while navigating through apertures.

    PubMed

    Wilmut, Kate; Barnett, Anna L

    2011-04-01

    During everyday locomotion, we encounter a range of obstacles requiring specific motor responses; a narrow aperture which forces us to rotate our shoulders in order to pass through is one example. In adults, the decision to rotate the shoulders is body-scaled (Warren and Whang in J Exp Psychol Hum Percept Perform 13:371-383, 1987), and the movement through is temporally and spatially tailored to the aperture size (Higuchi et al. in Exp Brain Res 175:50-59, 2006; Wilmut and Barnett in Hum Mov Sci 29:289-298, 2010). The aim of the current study was to determine how 8- to 10-year-old children make action judgements and movement adaptations while passing through a series of five aperture sizes which were scaled to body size (0.9, 1.1, 1.3, 1.5 and 1.7 times shoulder width). Spatial and temporal characteristics of movement speed and shoulder rotation were collected over the initial approach phase and while crossing the doorway threshold. In terms of making action judgements, results suggest that the decision to rotate the shoulders is not scaled in the same way as adults, with children showing a critical ratio of 1.61. Shoulder angle at the door could be predicted, for larger aperture ratios, by both shoulder angle variability and lateral trunk variability. This finding supports the dynamical scaling model (Snapp-Childs and Bingham in Exp Brain Res 198:527-533, 2009). In terms of movement adaptations, we have shown that children, like adults, spatially and temporally tailor their movements to aperture size.

  17. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanated from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This will be qualified for harsh environments for the 4D light field imaging, and will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is using existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can. 
Robotics need
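The programmable-aperture idea above can be sketched as binary masks on a mirror grid: opening different off-center sub-apertures selects different viewpoints through the same lens, and combining them yields a coded aperture pattern. Grid size, offsets and radius are illustrative assumptions.

```python
import numpy as np

n = 16  # the DMD modeled as an n x n grid of on/off mirrors (illustrative)

def subaperture(cx, cy, r):
    """Binary mask opening a circular sub-aperture centered at (cx, cy)."""
    y, x = np.mgrid[0:n, 0:n]
    return ((x - cx)**2 + (y - cy)**2 <= r * r).astype(int)

left = subaperture(4, 8, 3)    # viewpoint shifted toward one side of the lens
right = subaperture(12, 8, 3)  # viewpoint shifted toward the other side
both = left | right            # a two-viewpoint (coded) aperture pattern
```

Cycling through masks like `left` and `right` in time is the moving-aperture acquisition; displaying `both` at once is one simple coded-aperture configuration.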

  18. Integron-associated mobile gene cassettes code for folded proteins: the structure of Bal32a, a new member of the adaptable alpha+beta barrel family.

    PubMed

    Robinson, Andrew; Wu, Peter S-C; Harrop, Stephen J; Schaeffer, Patrick M; Dosztányi, Zsuzsanna; Gillings, Michael R; Holmes, Andrew J; Nevalainen, K M Helena; Stokes, H W; Otting, Gottfried; Dixon, Nicholas E; Curmi, Paul M G; Mabbutt, Bridget C

    2005-03-11

    The wide-ranging physiology and large genetic variability observed for prokaryotes is largely attributed, not to the prokaryotic genome itself, but rather to mechanisms of lateral gene transfer. Cassette PCR has been used to sample the integron/gene cassette metagenome from different natural environments without laboratory cultivation of the host organism, and without prior knowledge of any target protein sequence. Since over 90% of cassette genes are unrelated to any sequence in the current databases, it is not clear whether these genes code for folded functional proteins. We have selected a sample of eight cassette-encoded genes with no known homologs; five have been isolated as soluble protein products and shown by biophysical techniques to be folded. In solution, at least three of these proteins organise as stable oligomeric assemblies. The tertiary structure of one of these, Bal32a derived from a contaminated soil site, has been solved by X-ray crystallography to 1.8 Å resolution. From the three-dimensional structure, Bal32a is found to be a member of the highly adaptable alpha+beta barrel family of transport proteins and enzymes. In Bal32a, the barrel cavity is unusually deep and inaccessible to solvent. Polar side-chains in its interior are reminiscent of catalytic sites of limonene-1,2-epoxide hydrolase and nogalonic acid methyl ester cyclase. These studies demonstrate the viability of direct sampling of mobile DNA as a route for the discovery of novel proteins.

  19. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
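The solver strategy described above, a preconditioned Krylov subspace method embedded in an inexact Newton iteration, can be sketched on a toy two-variable nonlinear system. This is illustrative only: the system, tolerances, and use of SciPy's GMRES stand in for TranAir's actual discretization and solver.

```python
import numpy as np
from scipy.sparse.linalg import gmres

# Toy nonlinear system F(x) = 0 solved with an inexact Newton method:
# each Newton step solves J(x) dx = -F(x) only approximately with GMRES
# (a Krylov subspace method), mirroring the outer/inner iteration structure.
def F(x):
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**2 - 2.0])

def J(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def inexact_newton(x0, tol=1e-10, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        # Inner Krylov solve; its tolerance need not be tight for the
        # outer Newton iteration to converge.
        dx, _ = gmres(J(x), -r)
        x = x + dx
    return x

x = inexact_newton([2.0, 0.5])   # converges to the root (1, 1)
```

The same pattern scales to the large sparse Jacobians produced by a finite element discretization, where a direct linear solve would be impractical.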

  20. Broadband synthetic aperture geoacoustic inversion.

    PubMed

    Tan, Bien Aik; Gerstoft, Peter; Yardim, Caglar; Hodgkiss, William S

    2013-07-01

    A typical geoacoustic inversion procedure involves powerful source transmissions received on a large-aperture receiver array. A more practical approach is to use a single moving source and/or receiver in a low signal to noise ratio (SNR) setting. This paper uses single-receiver, broadband, frequency coherent matched-field inversion and exploits coherently repeated transmissions to improve estimation of the geoacoustic parameters. The long observation time creates a synthetic aperture due to relative source-receiver motion. This approach is illustrated by studying the transmission of multiple linear frequency modulated (LFM) pulses which results in a multi-tonal comb spectrum that is Doppler sensitive. To correlate well with the measured field across a receiver trajectory and to incorporate transmission from a source trajectory, waveguide Doppler and normal mode theory is applied. The method is demonstrated with low SNR, 100-900 Hz LFM pulse data from the Shallow Water 2006 experiment.

  1. VSATs - Very small aperture terminals

    NASA Astrophysics Data System (ADS)

    Everett, John L.

    The present volume on very small aperture terminals (VSATs) discusses antennas, semiconductor devices, and traveling wave tubes and amplifiers for VSAT systems, VSAT low noise downconverters, and modems and codecs for VSAT systems. Attention is given to multiaccess protocols for VSAT networks, protocol software in Ku-band VSAT network systems, system design of VSAT data networks, and the policing of VSAT networks. Topics addressed include the PANDATA and PolyCom systems, APOLLO - a satellite-based information distribution system, data broadcasting within a satellite television channel, and the NEC NEXTAR VSAT system. Also discussed are small aperture military ground terminals, link budgets for VSAT systems, capabilities and experience of a VSAT service provider, and developments in VSAT regulation.

  2. Large Aperture Scintillometer Intercomparison Study

    NASA Astrophysics Data System (ADS)

    Kleissl, J.; Gomez, J.; Hong, S.-H.; Hendrickx, J. M. H.; Rahn, T.; Defoor, W. L.

    2008-07-01

    Two field studies with six large aperture scintillometers (LASs) were performed using horizontal and slant paths. The accuracy of this novel and increasingly popular technique for measuring sensible heat fluxes was quantified by comparing measurements from different instruments over nearly identical transects. Random errors in LAS measurements were small, since correlation coefficients between adjacent measurements were greater than 0.995. However, for an ideal set-up differences in linear regression slopes of up to 21% were observed with typical inter-instrument differences of 6%. Differences of 10% are typical in more realistic measurement scenarios over homogeneous natural vegetation and different transect heights and locations. Inaccuracies in the optics, which affect the effective aperture diameter, are the most likely explanation for the observed differences.

  3. Synthetic Aperture Radar Simulation Study

    DTIC Science & Technology

    1984-03-01

    multilook are discussed. A chapter is devoted to elevation and planimetric data bases. In addition, sixteen pictures of SAR images from Hughes Aircraft, as...scans. Figure 5.4-1 is a photograph of two SAR displays. The first display is made up of six subscans and has a multilook of one. Note that fading is... Keywords: Synthetic Aperture Radar (SAR), Simulation Study, Radar Simulation, Data Bases, Computer Image Generation, Display

  4. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  5. 3D synthetic aperture for controlled-source electromagnetics

    NASA Astrophysics Data System (ADS)

    Knaak, Allison

    Locating hydrocarbon reservoirs has become more challenging with smaller, deeper or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and depth of the reservoir. To reduce the impact of complicated settings and improve the detecting capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplifies the anomaly from the reservoir, lowers the noise floor, and reduces noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets
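The steering idea above, weighting each source response with a phase progression so a spatially coherent anomaly adds constructively while independent noise adds incoherently, can be sketched numerically. The source positions, phase scale, and amplitudes below are hypothetical, not values from the thesis.

```python
import numpy as np

# Steered synthetic aperture sketch: sum per-source responses with
# phase-steering weights matched to a coherent anomaly.
rng = np.random.default_rng(0)
n_src = 20
x = np.linspace(0.0, 1900.0, n_src)          # inline source positions (m), assumed

wavelength = 2000.0                          # assumed anomaly phase scale (m)
anomaly = 1e-14 * np.exp(1j * 2 * np.pi * x / wavelength)   # coherent target signal
noise = 3e-15 * (rng.standard_normal(n_src)
                 + 1j * rng.standard_normal(n_src))          # independent noise
responses = anomaly + noise                  # hypothetical per-source responses

# Steering weights: linear phase progression matched to the anomaly.
weights = np.exp(-1j * 2 * np.pi * x / wavelength)

steered = np.sum(weights * responses)        # steered synthetic aperture sum
unweighted = np.sum(responses)               # plain sum for comparison
```

With matched steering, the anomaly contributions align in phase, so `abs(steered)` grows linearly with the number of sources while the unweighted sum largely cancels.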

  6. Controlled-aperture wave-equation migration

    SciTech Connect

    Huang, L.; Fehler, Michael C.; Sun, H.; Li, Z.

    2003-01-01

    We present a controlled-aperture wave-equation migration method that not only can reduce migration artifacts due to limited recording apertures and determine image weights to balance the effects of limited-aperture illumination, but also can improve the migration accuracy by reducing the slowness perturbations within the controlled migration regions. The method consists of two steps: migration aperture scan and controlled-aperture migration. Migration apertures for a sparse distribution of shots are determined using wave-equation migration, and those for the other shots are obtained by interpolation. During the final controlled-aperture migration step, we can select a reference slowness in controlled regions of the slowness model to reduce slowness perturbations, and consequently increase the accuracy of wave-equation migration methods that make use of reference slownesses. In addition, the computation in the space domain during wavefield downward continuation needs to be conducted only within the controlled apertures; therefore, the computational cost of the controlled-aperture migration step (without including the migration aperture scan) is less than that of the corresponding uncontrolled-aperture migration. Finally, we can use the efficient split-step Fourier approach for the migration-aperture scan, then use other, more accurate though more expensive, wave-equation migration methods to perform the final controlled-aperture migration to produce the most accurate image.

  7. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y⁻¹. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 µSv kg Bq⁻¹ y⁻¹ for soil and 0.00596 µSv m³ Bq⁻¹ y⁻¹ for water (assuming a 1:1 ²³⁴U:²³⁸U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 µSv kg Bq⁻¹ y⁻¹ in soil and 13.0 µSv m³ Bq⁻¹ y⁻¹ in water.
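The dose factors quoted in the abstract are meant for simple estimation: annual dose is the factor times the activity concentration, summed over pathways. A minimal sketch using the adult/industrial uranium factors from the abstract; the activity concentrations below are hypothetical.

```python
# Annual dose estimate from environmental dose factors:
# dose = factor * activity concentration, summed over soil and water pathways.
# Factors are the adult/industrial uranium values quoted in the abstract;
# the concentrations are made-up inputs for illustration.
soil_factor_uSv_per_Bq_kg = 0.00476    # µSv per (Bq/kg) per year
water_factor_uSv_per_Bq_m3 = 0.00596   # µSv per (Bq/m³) per year

soil_activity_Bq_per_kg = 40.0         # hypothetical uranium activity in soil
water_activity_Bq_per_m3 = 500.0       # hypothetical uranium activity in water

annual_dose_uSv = (soil_factor_uSv_per_Bq_kg * soil_activity_Bq_per_kg
                   + water_factor_uSv_per_Bq_m3 * water_activity_Bq_per_m3)
# 0.00476*40 + 0.00596*500 = 0.1904 + 2.98 = 3.1704 µSv per year
```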

  8. An adaptive-in-temperature method for on-the-fly sampling of thermal neutron scattering data in continuous-energy Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pavlou, Andrew Theodore

    The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
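The fit-and-evaluate idea behind such on-the-fly methods can be sketched generically: pre-compute a quantity at a few temperatures, fit low-order coefficients by linear least squares, then evaluate the fit at any temperature during the random walk instead of storing full tables. The quantity and polynomial form below are illustrative, not the dissertation's actual model.

```python
import numpy as np

# Pre-computed grid: temperatures at which data were tabulated, and
# hypothetical values of some scattering-related quantity at each.
T_grid = np.array([300.0, 600.0, 900.0, 1200.0])
y_grid = 1.0 + 2.0e-3 * T_grid + 1.0e-7 * T_grid**2

# Linear least squares fit of c in y ≈ c0 + c1*T + c2*T².
A = np.vander(T_grid, 3, increasing=True)        # columns: 1, T, T²
coeffs, *_ = np.linalg.lstsq(A, y_grid, rcond=None)

def evaluate(T):
    """On-the-fly evaluation at an arbitrary temperature."""
    return coeffs[0] + coeffs[1] * T + coeffs[2] * T**2

y_750 = evaluate(750.0)    # between grid points; only the three fit
                           # coefficients are stored, not a table per T
```

Storage then scales with the number of fit coefficients rather than with the number of temperature grid points, which is the memory saving the abstract describes.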

  9. Dual aperture multispectral Schmidt objective

    NASA Astrophysics Data System (ADS)

    Minott, P. O.

    1984-04-01

    A dual aperture, off-axis catadioptic Schmidt objective is described. It is formed by symmetrically aligning two pairs of Schmidt objectives on opposite sides of a common plane (x,z). Each objective has a spherical primary mirror with a spherical focal plane and center of curvature aligned along an optic axis laterally spaced apart from the common plane. A multiprism beamsplitter with buried dichroic layers and a convex entrance and concave exit surfaces optically concentric to the center of curvature may be positioned at the focal plane. The primary mirrors of each objective may be connected rigidly together and may have equal or unequal focal lengths.

  10. Dual aperture multispectral Schmidt objective

    NASA Technical Reports Server (NTRS)

    Minott, P. O. (Inventor)

    1984-01-01

    A dual aperture, off-axis catadioptic Schmidt objective is described. It is formed by symmetrically aligning two pairs of Schmidt objectives on opposite sides of a common plane (x,z). Each objective has a spherical primary mirror with a spherical focal plane and center of curvature aligned along an optic axis laterally spaced apart from the common plane. A multiprism beamsplitter with buried dichroic layers and a convex entrance and concave exit surfaces optically concentric to the center of curvature may be positioned at the focal plane. The primary mirrors of each objective may be connected rigidly together and may have equal or unequal focal lengths.

  11. the Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and the first operation at high altitude, the search for bursts in 6 months of preliminary data, as well as the search for a signal at ground level when satellites report a burst.

  12. Aperture scanning Fourier ptychographic microscopy

    PubMed Central

    Ou, Xiaoze; Chung, Jaebum; Horstmeyer, Roarke; Yang, Changhuei

    2016-01-01

    Fourier ptychographic microscopy (FPM) is implemented through aperture scanning by an LCOS spatial light modulator at the back focal plane of the objective lens. This FPM configuration enables the capturing of the complex scattered field for a 3D sample both in the transmissive mode and the reflective mode. We further show that by combining with the compressive sensing theory, the reconstructed 2D complex scattered field can be used to recover the 3D sample scattering density. This implementation expands the scope of application for FPM and can be beneficial for areas such as tissue imaging and wafer inspection. PMID:27570705

  13. Particle-in-Cell Modeling of Magnetized Argon Plasma Flow Through Small Mechanical Apertures

    SciTech Connect

    Adam B. Sefkow and Samuel A. Cohen

    2009-04-09

    Motivated by observations of supersonic argon-ion flow generated by linear helicon-heated plasma devices, a three-dimensional particle-in-cell (PIC) code is used to study whether stationary electrostatic layers form near mechanical apertures intersecting the flow of magnetized plasma. By self-consistently evaluating the temporal evolution of the plasma in the vicinity of the aperture, the PIC simulations characterize the roles of the imposed aperture and applied magnetic field on ion acceleration. The PIC model includes ionization of a background neutral-argon population by thermal and superthermal electrons, the latter found upstream of the aperture. Near the aperture, a transition from a collisional to a collisionless regime occurs. Perturbations of density and potential, with mm wavelengths and consistent with ion acoustic waves, propagate axially. An ion acceleration region of length ~ 200-300 λD,e forms at the location of the aperture and is found to be an electrostatic double layer, with axially-separated regions of net positive and negative charge. Reducing the aperture diameter or increasing its length increases the double layer strength.

  14. Polar Codes

    DTIC Science & Technology

    2014-12-01

    ...a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes. ...the most common. Many civilian systems use low density parity check (LDPC) FEC codes, and the Navy is planning to use LDPC for some future systems. ...other forward error correction methods: a turbo code, a low density parity check (LDPC) code, a Reed–Solomon code, and three convolutional codes.

  15. Hand aperture patterns in prehension.

    PubMed

    Bongers, Raoul M; Zaal, Frank T J M; Jeannerod, Marc

    2012-06-01

    Although variations in the standard prehensile pattern can be found in the literature, these alternative patterns have never been studied systematically. This was the goal of the current paper. Ten participants picked up objects with a pincer grip. Objects (3, 5, or 7 cm in diameter) were placed at 30, 60, 90, or 120 cm from the hand's starting location. Usually the hand was opened gradually to a maximum immediately followed by hand closing, called the standard hand opening pattern. In the alternative opening patterns the hand opening was bumpy, or the hand aperture stayed at a plateau before closing started. Two participants in particular delayed the start of grasping with respect to the start of reaching, with the delay time increasing with object distance. For larger object distances and smaller object sizes, the bumpy and plateau hand opening patterns were used more often. We tentatively concluded that the alternative hand opening patterns extended the hand opening phase, to arrive at the appropriate hand aperture at the appropriate time to close the hand for grasping the object. Variations in hand opening patterns deserve attention because this might lead to new insights into the coordination of reaching and grasping.

  16. Multiple arrested synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Shuster, J. S.

    1981-05-01

    This report contains the formulation and analysis of an airborne synthetic aperture radar scheme which employs a multiplicity of antennas with the displaced phase center antenna technique to detect slowly moving targets embedded in a severe clutter environment. The radar is evaluated using the target to clutter power ratio as the measure of performance. Noise is ignored in the analysis. An optimization scheme which maximizes this ratio is employed to obtain the optimum processor weighting. The performance of the MASAR processor with optimum weights is compared against that using target weights (composed of the target signal) and that using binomial weights (which, effectively, form an n-pulse canceller). Both the target and the clutter are modeled with the electric field backscattering coefficient. The target is modeled simply as a deterministically moving point scatterer with the same albedo as a point of clutter. The clutter is modeled as a homogeneous, isotropic, two dimensional, spatiotemporal random field for which only the correlation properties are required. The analysis shows that this radar, with its optimum weighting scheme, is a promising synthetic aperture concept for the detection of slowly moving targets immersed in strong clutter environments.
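The weighting that maximizes the target-to-clutter power ratio |wᴴs|²/(wᴴRw) is the standard result w ∝ R⁻¹s, where s is the target signal vector and R the clutter covariance. A sketch with a hypothetical correlated-clutter covariance and target phase history (not the report's models), compared against the binomial weights the abstract mentions:

```python
import numpy as np
from scipy.special import comb

n = 8                                       # pulses / phase centers (assumed)
s = np.exp(1j * 0.4 * np.arange(n))         # hypothetical slow-target phase history

# Hypothetical clutter covariance: strong pulse-to-pulse correlation plus
# a small ridge to keep it well conditioned.
idx = np.arange(n)
R = 0.95 ** np.abs(idx[:, None] - idx[None, :]) + 0.01 * np.eye(n)

w_opt = np.linalg.solve(R, s)               # optimum weights (up to scale)

def tcr(w):
    """Target-to-clutter power ratio |w^H s|^2 / (w^H R w)."""
    return np.abs(np.vdot(w, s)) ** 2 / np.real(np.vdot(w, R @ w))

# Binomial weights, which effectively form an n-pulse canceller.
w_bin = np.array([(-1) ** k * comb(n - 1, k) for k in range(n)], dtype=complex)
```

By construction `tcr(w_opt)` attains the maximum value sᴴR⁻¹s, so it is at least as large as the ratio achieved by the target weights `s` or the binomial weights.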

  17. Diffraction smoothing aperture for an optical beam

    DOEpatents

    Judd, O'Dean P.; Suydam, Bergen R.

    1976-01-01

    The disclosure is directed to an aperture for an optical beam having an irregular periphery or having perturbations imposed upon the periphery to decrease the diffraction effect caused by the beam passing through the aperture. Such apertures are particularly useful with high power solid state laser systems in that they minimize the problem of self-focusing which frequently destroys expensive components in such systems.

  18. Efficient entropy coding for scalable video coding

    NASA Astrophysics Data System (ADS)

    Choi, Woong Il; Yang, Jungyoup; Jeon, Byeungwoo

    2005-10-01

    The standardization for the scalable extension of H.264 has called for additional functionality based on H.264 standard to support the combined spatio-temporal and SNR scalability. For the entropy coding of H.264 scalable extension, Context-based Adaptive Binary Arithmetic Coding (CABAC) scheme is considered so far. In this paper, we present a new context modeling scheme by using inter layer correlation between the syntax elements. As a result, it improves coding efficiency of entropy coding in H.264 scalable extension. In simulation results of applying the proposed scheme to encoding the syntax element mb_type, it is shown that improvement in coding efficiency of the proposed method is up to 16% in terms of bit saving due to estimation of more adequate probability model.
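The benefit of conditioning the probability model on a correlated context can be sketched with a toy adaptive binary model. This is illustrative only, not the CABAC state machine: position parity stands in for the inter-layer correlation the paper exploits, and the coder charges the ideal -log2(p) bits per symbol.

```python
import math

# Toy context-adaptive binary model: each context keeps Laplace-smoothed
# counts of 0/1 and adapts as symbols are coded.
class ContextModel:
    def __init__(self):
        self.counts = [1, 1]                 # Laplace smoothing

    def bits_and_update(self, bit):
        p = self.counts[bit] / sum(self.counts)
        self.counts[bit] += 1
        return -math.log2(p)                 # ideal code length for this symbol

def code_cost(bits, n_contexts, ctx_of):
    models = [ContextModel() for _ in range(n_contexts)]
    return sum(models[ctx_of(i, b)].bits_and_update(b)
               for i, b in enumerate(bits))

# A source whose statistics depend on position parity: a two-context model
# (conditioning on parity) adapts far better than one shared context.
bits = [i % 2 for i in range(200)]
cost_1ctx = code_cost(bits, 1, lambda i, b: 0)      # ~1 bit per symbol
cost_2ctx = code_cost(bits, 2, lambda i, b: i % 2)  # cost shrinks toward zero
```

Splitting the model by the right context is exactly what lets each sub-model's probability estimate sharpen, which is the mechanism behind the bit savings the paper reports for inter-layer context modeling.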

  19. Ion mobility spectrometer with virtual aperture grid

    SciTech Connect

    Pfeifer, Kent B.; Rumpf, Arthur N.

    2010-11-23

    An ion mobility spectrometer does not require a physical aperture grid to prevent premature ion detector response. The last electrodes adjacent to the ion collector (typically the last four or five) have an electrode pitch that is less than the width of the ion swarm and each of the adjacent electrodes is connected to a source of free charge, thereby providing a virtual aperture grid at the end of the drift region that shields the ion collector from the mirror current of the approaching ion swarm. The virtual aperture grid is less complex in assembly and function and is less sensitive to vibrations than the physical aperture grid.

  20. Door assembly with shear layer control aperture

    NASA Technical Reports Server (NTRS)

    Kahn, William C. (Inventor); Johnston, John T. (Inventor); Fluegel, Kyle G. (Inventor)

    1996-01-01

    There is described a vehicle door assembly with shear layer control for controlling the airflow in and around an aperture in the vehicle fuselage. The vehicle door assembly consists of an upper door and a lower door, both slidably mounted to the exterior surface of the vehicle fuselage. In addition, an inner door is slidably mounted beneath the upper door. Beneath the inner door is an aperture assembly having an aperture opening positionable to be substantially flush with the exterior surface of the vehicle fuselage. Also provided are means for positioning the aperture assembly in an upward and downward direction in relation to the vehicle fuselage.

  1. Advanced optics experiments using nonuniform aperture functions

    NASA Astrophysics Data System (ADS)

    Wood, Lowell T.

    2013-05-01

    A method to create instructive, nonuniform aperture functions using spatial frequency filtering is described. The diffraction from a single slit in the Fresnel limit and the interference from a double slit in the Fraunhofer limit are spatially filtered to create electric field distributions across an aperture to produce apodization, inverse apodization or super-resolution, and apertures with phase shifts across their widths. The diffraction effects from these aperture functions are measured and calculated. The excellent agreement between the experimental results and the calculated results makes the experiment ideal for use in an advanced undergraduate or graduate optics laboratory to illustrate experimentally several effects in Fourier optics.
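The Fourier-optics fact the experiment relies on, that the Fraunhofer diffraction pattern is the Fourier transform of the aperture function so apodization suppresses sidelobes, can be sketched numerically. The grid, slit width, and cosine-squared taper below are illustrative choices, not the paper's setup.

```python
import numpy as np

# Far-field (Fraunhofer) diffraction via FFT of the aperture function.
N = 4096
x = np.linspace(-1.0, 1.0, N)
slit = np.where(np.abs(x) < 0.1, 1.0, 0.0)        # uniform slit
apodized = slit * np.cos(np.pi * x / 0.2) ** 2    # cosine-squared edge taper

def far_field(aperture):
    E = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture)))
    I = np.abs(E) ** 2
    return I / I.max()                            # normalized intensity

I_uniform = far_field(slit)
I_apodized = far_field(apodized)

def peak_sidelobe_db(I):
    """Walk down the main lobe to the first null, then take the peak beyond."""
    k = int(np.argmax(I))
    while k < len(I) - 1 and I[k + 1] < I[k]:
        k += 1
    return 10 * np.log10(I[k:].max())
```

The uniform slit's first sidelobe sits near -13 dB, while the cosine-squared (Hann-like) apodization pushes sidelobes below roughly -30 dB at the cost of a wider main lobe, which is the trade-off apodization and inverse apodization explore.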

  2. Negative Transconductance in Apertured Electron Guns

    SciTech Connect

    Harris, J R; O'Shea, P G

    2007-09-21

    Passing an electron beam through an aperture can serve to reduce the beam current or change the transverse beam profile. For a sufficiently intense beam, space charge will drive a radial expansion of the beam, which may cause the current passing through the aperture to increase even though the current arriving at the aperture is decreasing. When a gridded electron gun is used, this may be expressed by stating that the transconductance of the apertured gun is negative. Here we explain this effect, and explore some of the key factors governing when it can occur and influencing its strength.

  3. GPU-based minimum variance beamformer for synthetic aperture imaging of the eye.

    PubMed

    Yiu, Billy Y S; Yu, Alfred C H

    2015-03-01

    Minimum variance (MV) beamforming has emerged as an adaptive apodization approach to bolster the quality of images generated from synthetic aperture ultrasound imaging methods that are based on unfocused transmission principles. In this article, we describe a new high-speed, pixel-based MV beamforming framework for synthetic aperture imaging to form entire frames of adaptively apodized images at real-time throughputs and document its performance in swine eye imaging case examples. Our framework is based on parallel computing principles, and its real-time operational feasibility was realized on a six-GPU (graphics processing unit) platform with 3,072 computing cores. This framework was used to form images with synthetic aperture imaging data acquired from swine eyes (based on virtual point-source emissions). Results indicate that MV-apodized image formation with video-range processing throughput (>20 fps) can be realized for practical aperture sizes (128 channels) and frames with λ/2 pixel spacing. Also, in a corneal wound detection experiment, MV-apodized images generated using our framework revealed apparent contrast enhancement of the wound site (10.8 dB with respect to synthetic aperture images formed with fixed apodization). These findings indicate that GPU-based MV beamforming can, in real time, potentially enhance image quality when performing synthetic aperture imaging that uses unfocused firings.
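The per-pixel computation at the heart of MV apodization is the standard Capon solution w = R⁻¹a / (aᴴR⁻¹a), where a is the steering vector (all ones after per-channel delay alignment) and R the spatial covariance of the delayed channel data. A minimal sketch with hypothetical delay-aligned data, not the article's GPU implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch = 16                                   # receive channels (assumed)
snapshots = 64

# Hypothetical delay-aligned channel data: coherent unit signal + noise.
data = 1.0 + 0.5 * (rng.standard_normal((n_ch, snapshots))
                    + 1j * rng.standard_normal((n_ch, snapshots)))

# Spatial covariance estimate with diagonal loading for robustness.
R = data @ data.conj().T / snapshots
R += 1e-3 * np.trace(R).real / n_ch * np.eye(n_ch)

a = np.ones(n_ch, dtype=complex)            # steering vector after alignment
Ri_a = np.linalg.solve(R, a)
w = Ri_a / (a.conj() @ Ri_a)                # MV (Capon) apodization weights

pixel_value = w.conj() @ data.mean(axis=1)  # adaptively apodized pixel amplitude
```

The weights satisfy the distortionless constraint wᴴa = 1, so the coherent signal passes with unit gain while off-axis clutter is suppressed; a GPU framework like the article's evaluates this solve independently for every pixel, which is what makes the problem massively parallel.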

  4. Multifocal interferometric synthetic aperture microscopy

    PubMed Central

    Xu, Yang; Chng, Xiong Kai Benjamin; Adie, Steven G.; Boppart, Stephen A.; Scott Carney, P.

    2014-01-01

    There is an inherent trade-off between transverse resolution and depth of field (DOF) in optical coherence tomography (OCT) which becomes a limiting factor for certain applications. Multifocal OCT and interferometric synthetic aperture microscopy (ISAM) each provide a distinct solution to the trade-off through modification to the experiment or via post-processing, respectively. In this paper, we have solved the inverse problem of multifocal OCT and present a general algorithm for combining multiple ISAM datasets. Multifocal ISAM (MISAM) uses a regularized combination of the resampled datasets to bring advantages of both multifocal OCT and ISAM to achieve optimal transverse resolution, extended effective DOF and improved signal-to-noise ratio. We present theory, simulation and experimental results. PMID:24977909

  5. Large aperture Fresnel telescopes/011

    SciTech Connect

    Hyde, R.A., LLNL

    1998-07-16

    At Livermore we've spent the last two years examining an alternative approach towards very large aperture (VLA) telescopes, one based upon transmissive Fresnel lenses rather than on mirrors. Fresnel lenses are attractive for VLA telescopes because they are launchable (lightweight, packagable, and deployable) and because they virtually eliminate the traditional, very tight, surface shape requirements faced by reflecting telescopes. Their (potentially severe) optical drawback, a very narrow spectral bandwidth, can be eliminated by use of a second (much smaller) chromatically-correcting Fresnel element. This enables Fresnel VLA telescopes to provide either single band (Δλ/λ ≈ 0.1), multiple band, or continuous spectral coverage. Building and fielding such large Fresnel lenses will present a significant challenge, but one which appears, with effort, to be solvable.

  6. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships.

  7. Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix

    NASA Technical Reports Server (NTRS)

    Wiley, C. A.; Chang, M. U.

    1981-01-01

    A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiple) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.

  8. Enhanced Optical Transmission with Coaxial Apertures

    NASA Astrophysics Data System (ADS)

    Haftel, Michael; Schlockermann, Carl; Orbons, Shannon; Roberts, Ann; Jamieson, David; Freeman, Darren; Luther-Davies, Barry

    2007-03-01

    Recently it has been shown that "cylindrical" surface plasmons (CSPs) on cylindrical interfaces of coaxial ring apertures produce a new form of extraordinary optical transmission (EOT) that extends to ever increasing wavelengths as the dielectric ring narrows. Using analytic and FDTD calculations we present some of the consequences of CSPs on EOT as well as experimental confirmation of such effects. We find that EOT, even with cylindrical apertures, is aided by the increase in cutoff wavelength due to CSPs, which is a consequence of the mode structure of individual apertures. CSP effects also explain most of the long-wavelength features of transmission spectra measured for CR apertures. We also show that CSPs can be "spoofed" at low frequencies by coaxial apertures in metamaterials consisting of a (macroscopic) periodic dielectric structure embedded in a perfect conductor. F. I. Baida et al., Phys. Rev. B 67, 155314 (2003); M. I. Haftel et al., Appl. Phys. Lett. 88, 193104 (2006).

  9. Variable aperture collimator for high energy radiation

    DOEpatents

    Hill, Ronald A.

    1984-05-22

    An apparatus is disclosed providing a variable aperture energy beam collimator. A plurality of beam opaque blocks are in sliding interface edge contact to form a variable aperture. The blocks may be offset at the apex angle to provide a non-equilateral aperture. A plurality of collimator block assemblies may be employed for providing a channel defining a collimated beam. Adjacent assemblies are inverted front-to-back with respect to one another for preventing noncollimated energy from emerging from the apparatus. An adjustment mechanism comprises a cable attached to at least one block and a hand wheel mechanism for operating the cable. The blocks are supported by guide rods engaging slide brackets on the blocks. The guide rods are pivotally connected at each end to intermediate actuators supported on rotatable shafts to change the shape of the aperture. A divergent collimated beam may be obtained by adjusting the apertures of adjacent stages to be unequal.

  10. Micro Ring Grating Spectrometer with Adjustable Aperture

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Choi, Sang H. (Inventor)

    2012-01-01

    A spectrometer includes a micro-ring grating device having coaxially-aligned ring gratings for diffracting incident light onto a target focal point, a detection device for detecting light intensity, one or more actuators, and an adjustable aperture device defining a circular aperture. The aperture circumscribes a target focal point, and directs a light to the detection device. The aperture device is selectively adjustable using the actuators to select a portion of a frequency band for transmission to the detection device. A method of detecting intensity of a selected band of incident light includes directing incident light onto coaxially-aligned ring gratings of a micro-ring grating device, and diffracting the selected band onto a target focal point using the ring gratings. The method includes using an actuator to adjust an aperture device and pass a selected portion of the frequency band to a detection device for measuring the intensity of the selected portion.

  11. Efficient parallel implementation of polarimetric synthetic aperture radar data processing

    NASA Astrophysics Data System (ADS)

    Martinez, Sergio S.; Marpu, Prashanth R.; Plaza, Antonio J.

    2014-10-01

    This work investigates the parallel implementation of a polarimetric synthetic aperture radar (POLSAR) data processing chain. Such processing can be computationally expensive when large data sets are processed. However, the processing steps can be largely implemented in a high performance computing (HPC) environment. In this work, we studied different aspects of the computations involved in processing the POLSAR data and developed an efficient parallel scheme to achieve near-real-time performance. The algorithm is implemented using the message passing interface (MPI) framework in this work, but it can be easily adapted for other parallel architectures such as general purpose graphics processing units (GPGPUs).
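The domain decomposition described above can be sketched as follows. This is an illustrative stand-in, not the authors' code: a Python thread pool takes the place of MPI ranks, and the per-pixel operation is the standard polarimetric span |S_HH|² + 2|S_HV|² + |S_VV|² (channel names and block sizes are assumptions for the sketch).

```python
from concurrent.futures import ThreadPoolExecutor

def span_block(block):
    """Polarimetric span |S_HH|^2 + 2|S_HV|^2 + |S_VV|^2 per pixel."""
    return [[abs(hh)**2 + 2 * abs(hv)**2 + abs(vv)**2
             for (hh, hv, vv) in row] for row in block]

def parallel_span(image, workers=4):
    """Split the image into contiguous row blocks and process the blocks
    concurrently, mirroring an MPI-style domain decomposition (a thread
    pool stands in for MPI ranks here)."""
    n = len(image)
    step = max(1, (n + workers - 1) // workers)
    blocks = [image[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        rows = []
        for block_result in pool.map(span_block, blocks):
            rows.extend(block_result)
    return rows
```

In a real deployment each block would live on a separate node (e.g., via mpi4py), since the computation is embarrassingly parallel across rows.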

  12. Aperture Increase Options for the Dutch Open Telescope

    NASA Astrophysics Data System (ADS)

    Hammerschlag, R. H.; Bettonvil, F. C. M.; Jägers, A. P. L.; Rutten, R. J.

    2007-05-01

    This paper is an invitation to the international community to participate in the usage and a substantial upgrade of the Dutch Open Telescope on La Palma (DOT, http://dot.astro.uu.nl). We first give a brief overview of the approach, design, and current science capabilities of the DOT. It became a successful 0.2-arcsec-resolution solar movie producer through its combination of (i) an excellent site, (ii) effective wind flushing through the fully open design and construction of both the 45-cm telescope and the 15-m support tower, (iii) special designs which produce extraordinary pointing stability of the tower, equatorial mount, and telescope, (iv) simple and excellent optics with minimum wavefront distortion, and (v) large-volume speckle reconstruction including narrow-band processing. The DOT's multi-camera multi-wavelength speckle imaging system samples the solar photosphere and chromosphere simultaneously in various optical continua, the G band, Ca II H (tunable throughout the blue wing), and Hα (tunable throughout the line). The resulting DOT data sets are all public. The DOT database (http://dotdb.phys.uu.nl/DOT) now contains many tomographic image sequences with 0.2-0.3 arcsec resolution and up to multi-hour duration. You are welcome to pull them over for analysis. The main part of this contribution outlines DOT upgrade designs implementing larger aperture. The motivation for aperture increase is the recognition that optical solar physics needs the substantially larger telescope apertures that became useful with the advent of adaptive optics and viable through the DOT's open principle, both for photospheric polarimetry at high resolution and high sensitivity and for chromospheric fine-structure diagnosis at high cadence and full spectral sampling. Our upgrade designs for the DOT are presented in an incremental sequence of five options of which the simplest (Option I) achieves 1.4 m aperture using the present tower, mount, fold-away canopy, and multi

  13. Finding Optimal Apertures in Kepler Data

    NASA Astrophysics Data System (ADS)

    Smith, Jeffrey C.; Morris, Robert L.; Jenkins, Jon M.; Bryson, Stephen T.; Caldwell, Douglas A.; Girouard, Forrest R.

    2016-12-01

    With the loss of two spacecraft reaction wheels precluding further data collection for the Kepler primary mission, even greater pressure is placed on the processing pipeline to eke out every last transit signal in the data. To that end, we have developed a new method to optimize the Kepler Simple Aperture Photometry (SAP) photometric apertures for both planet detection and minimization of systematic effects. The approach uses a per cadence modeling of the raw pixel data and then performs an aperture optimization based on signal-to-noise ratio and the Kepler Combined Differential Photometric Precision (CDPP), which is a measure of the noise over the duration of a reference transit signal. We have found the new apertures to be superior to the previous Kepler apertures. We can now also find a per cadence flux fraction in aperture and crowding metric. The new approach has also been proven to be robust at finding apertures in K2 data that help mitigate the larger motion-induced systematics in the photometry. The method further allows us to identify errors in the Kepler and K2 input catalogs.
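The signal-to-noise side of such an aperture search can be sketched with a greedy scheme: add pixels brightest-first and keep the prefix whose summed flux over root-summed variance peaks. This is a simplification of the pipeline described above (which also optimizes CDPP); the pixel ordering and independent-noise model are assumptions of the sketch.

```python
import math

def optimal_aperture(fluxes, variances):
    """Greedy SNR-maximizing aperture: rank pixels by flux, then keep
    the prefix maximizing sum(flux) / sqrt(sum(variance))."""
    order = sorted(range(len(fluxes)), key=lambda i: fluxes[i], reverse=True)
    best_snr, best_k, f_sum, v_sum = 0.0, 0, 0.0, 0.0
    for k, i in enumerate(order, start=1):
        f_sum += fluxes[i]
        v_sum += variances[i]
        snr = f_sum / math.sqrt(v_sum)
        if snr > best_snr:
            best_snr, best_k = snr, k
    return set(order[:best_k]), best_snr
```

Adding a faint pixel lowers the SNR once its noise contribution outweighs its flux, which is why the optimum is an interior prefix rather than the full mask.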

  14. Walking through Apertures in Individuals with Stroke

    PubMed Central

    Higuchi, Takahiro

    2017-01-01

    Objective Walking through a narrow aperture requires unique postural configurations, i.e., body rotation in the yaw dimension. Stroke individuals may have difficulty performing the body rotations due to motor paralysis on one side of their body. The present study was therefore designed to investigate how successfully such individuals walk through apertures and how they perform body rotation behavior. Method Stroke fallers (n = 10), stroke non-fallers (n = 13), and healthy controls (n = 23) participated. In the main task, participants walked for 4 m and passed through apertures of various widths (0.9–1.3 times the participant’s shoulder width). Accidental contact with the frame of an aperture and kinematic characteristics at the moment of aperture crossing were measured. Participants also performed a perceptual judgment task to measure the accuracy of their perceived aperture passability. Results and Discussion Stroke fallers made frequent contacts on their paretic side; however, the contacts were not frequent when they penetrated apertures from their paretic side. Stroke fallers and non-fallers rotated their body with multiple steps, rather than a single step, to deal with their motor paralysis. Although the minimum passable width was greater for stroke fallers, the body rotation angle was comparable among groups. This suggests that frequent contact in stroke fallers was due to insufficient body rotation. The fact that there was no significant group difference in the perceived aperture passability suggested that contact occurred mainly due to locomotor factors rather than perceptual factors. Two possible explanations (availability of vision and/or attention) were provided as to why accidental contact on the paretic side did not occur frequently when stroke fallers penetrated the apertures from their paretic side. PMID:28103299

  15. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  16. Parallel optical nanolithography using nanoscale bowtie apertures

    NASA Astrophysics Data System (ADS)

    Uppuluri, Sreemanth M. V.

    Over the past two decades various branches of science and engineering have developed techniques for producing nanoscopic light sources for different applications such as imaging, detection and fabrication. These areas include near-field scanning optical microscopy (NSOM), surface-enhanced Raman scattering and detection (SERS), plasmonics and so on. In particular nanolithography techniques have been developed to produce feature sizes in the sub-100 nm length scales. These processes include variations of standard photolithography process to achieve high resolution, optical fiber-based near-field lithography, surface plasmon assisted nanolithography, interference optical lithography and so on. This work presents a study of the viability of using nanoscale bowtie apertures for nanolithography. Bowtie apertures exhibit a unique property of supporting a propagating TE10 mode at wavelengths of light in the visible and near-UV regimes. The energy of this mode is concentrated in the gap region of the aperture and thus these apertures have the potential to produce high intensity nanoscale light spots that can be used for nano-patterning applications. We demonstrate this capability of nanoscale bowtie apertures by patterning photoresist to obtain resolution less than 100 nm. Initially we present the results from static lithography experiments and show that the ridge apertures of different shapes -- C, H and bowtie -- produce holes in the photoresist of dimensions around 50-60 nm. Subsequently we address the issues involved in using these apertures for nano direct-writing. We show that chromium thin-films offer a viable solution to produce high quality metal films of surface roughness less than 1 nm over an area of 25 μm². This is indeed important to achieve intimate contact between the apertures and the photoresist surface. We also explain ways to decrease friction between the mask and photoresist surfaces during nano direct-writing. In addition, to decrease the contact force

  17. Passive microwave imaging by aperture synthesis technology

    NASA Astrophysics Data System (ADS)

    Lang, Liang; Zhang, Zuyin; Guo, Wei; Gui, Liangqi

    2007-11-01

    In order to verify the theory of aperture synthesis at low expense, a two-channel Ka-band correlation radiometer, the basic building block of a synthetic aperture radiometer, was designed first, before development of the multi-channel instrument. The performance of the two-channel correlation radiometer, such as the stability and coherence of the visibility phase, was tested in a digital correlation experiment. Subsequently, all required baselines were acquired by moving the antenna pair sequentially, the corresponding samples of the visibility function were measured, and an image of the noise source was constructed using an inverse Fourier transformation.
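The visibility-to-image step can be illustrated with a 1-D toy model: each baseline measures one Fourier component of the scene brightness, and an inverse discrete Fourier transform over the measured baselines reconstructs the image. Ideal point sampling on a unit grid is assumed; this is not the instrument's actual calibration chain.

```python
import cmath

def visibilities(brightness, baselines):
    """Toy 1-D visibility function: V(u) = sum_k B_k exp(-2πi u k / N),
    one complex sample per baseline u."""
    n = len(brightness)
    return [sum(b * cmath.exp(-2j * cmath.pi * u * k / n)
                for k, b in enumerate(brightness))
            for u in baselines]

def invert(vis, baselines, n):
    """Reconstruct brightness via an inverse DFT over the measured
    baselines (exact when the baseline set is complete)."""
    return [sum(v * cmath.exp(2j * cmath.pi * u * k / n)
                for v, u in zip(vis, baselines)).real / n
            for k in range(n)]
```

With an incomplete baseline set the same inversion yields a "dirty" image, which is why real synthesis arrays care about baseline coverage.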

  18. Applications of Adaptive Learning Controller to Synthetic Aperture Radar.

    DTIC Science & Technology

    1985-02-01

    (The indexed abstract for this report consists of list-of-figures fragments, e.g., "FIGURE 37. Location of Two Sub-Phase Histories to be Utilized in Estimating Misfocus Coefficients A and C" and "FIGURES 38-94. ALC Learning Curves.")

  19. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
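The per-pixel temporal coding can be sketched with a 1-D toy forward model: each video frame is modulated by a shifted binary aperture code and the modulated frames are summed into a single snapshot. A circular shift of a 1-D code is an assumption of the sketch; the physical system translates a 2-D mask.

```python
def cacti_measurement(frames, code):
    """Single coded snapshot of a short video clip:
    y[i] = sum_t code[(i - t) mod N] * x_t[i],
    i.e., frame t sees the aperture code circularly shifted by t."""
    n = len(code)
    return [sum(code[(i - t) % n] * frame[i]
                for t, frame in enumerate(frames))
            for i in range(n)]
```

Because each frame is tagged by a distinct code shift, a compressive-sensing reconstruction can later disentangle the temporal dimension from the single spatial measurement.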

  20. Solar Adaptive Optics.

    PubMed

    Rimmele, Thomas R; Marino, Jose

    Adaptive optics (AO) has become an indispensable tool at ground-based solar telescopes. AO enables the ground-based observer to overcome the adverse effects of atmospheric seeing and obtain diffraction limited observations. Over the last decade, adaptive optics systems have been deployed at major ground-based solar telescopes and revitalized ground-based solar astronomy. The relatively small aperture of solar telescopes and the bright source make solar AO possible for visible wavelengths, where the majority of solar observations are still performed. Solar AO systems enable diffraction limited observations of the Sun for a significant fraction of the available observing time at ground-based solar telescopes, which often have a larger aperture than equivalent space-based observatories, such as HINODE. New groundbreaking scientific results have been achieved with solar adaptive optics and this trend continues. New large aperture telescopes are currently being deployed or are under construction. With the aid of solar AO these telescopes will obtain observations of the highly structured and dynamic solar atmosphere with unprecedented resolution. This paper reviews solar adaptive optics techniques and summarizes the recent progress in the field of solar adaptive optics. An outlook to future solar AO developments, including a discussion of Multi-Conjugate AO (MCAO) and Ground-Layer AO (GLAO), will be given.

  1. Phased Array Mirror Extendible Large Aperture (PAMELA) Optics Adjustment

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Scientists at Marshall's Adaptive Optics Lab demonstrate the Wave Front Sensor alignment using the Phased Array Mirror Extendible Large Aperture (PAMELA) optics adjustment. The primary objective of the PAMELA project is to develop methods for aligning and controlling adaptive optics segmented mirror systems. These systems can be used to acquire or project light energy. The Next Generation Space Telescope is an example of an energy acquisition system that will employ segmented mirrors. Light projection systems can also be used for power beaming and orbital debris removal. All segmented optical systems must be adjusted to provide maximum performance. PAMELA is an ongoing project that NASA is utilizing to investigate various methods for maximizing system performance.

  2. Synthetic Aperture Radar Missions Study Report

    NASA Technical Reports Server (NTRS)

    Bard, S.

    2000-01-01

    This report reviews the history of the LightSAR project and summarizes actions the agency can undertake to support industry-led efforts to develop an operational synthetic aperture radar (SAR) capability in the United States.

  3. An empirical explanation of aperture effects.

    PubMed

    Sung, Kyongje; Wojtach, William T; Purves, Dale

    2009-01-06

    The perceived direction of a moving line changes, often markedly, when viewed through an aperture. Although several explanations of this remarkable effect have been proposed, these accounts typically focus on the percepts elicited by a particular type of aperture and offer no biological rationale. Here, we test the hypothesis that to contend with the inherently ambiguous nature of motion stimuli the perceived direction of objects moving behind apertures of different shapes is determined by a wholly empirical strategy of visual processing. An analysis of moving line stimuli generated by objects projected through apertures shows that the directions of motion subjects report in psychophysical testing are accounted for by the frequency of occurrence of the 2D directions of stimuli generated by simulated 3D sources. The completeness of these predictions supports the conclusion that the direction of perceived motion is fully determined by accumulated behavioral experience with sources whose physical motions cannot be conveyed by image sequences as such.

  4. Contour-Mapping Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Caro, E. R.; Wu, C.

    1985-01-01

    Airborne two-antenna synthetic-aperture-radar (SAR) interferometric system provides data processed to yield terrain elevation as well as reflected-intensity information. Relative altitudes of terrain points are measured to within an error of approximately 25 m.

  5. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Y.W.; Wiedermann, A.H.; Ockert, C.E.

    1983-08-26

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  6. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Yong W.; Wiedermann, Arne H.; Ockert, Carl E.

    1985-01-01

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  7. Eyeglass. 1. Very large aperture diffractive telescopes.

    PubMed

    Hyde, R A

    1999-07-01

    The Eyeglass is a very large aperture (25-100-m) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope's large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently launchable (lightweight, packageable, and deployable) and it virtually eliminates the traditional, very tight surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope's eyepiece; the Eyeglass can provide diffraction-limited imaging with either single-band (Δλ/λ ≈ 0.1), multiband, or continuous spectral coverage.

  8. Eyeglass. 1. Very large aperture diffractive telescopes

    SciTech Connect

    Hyde, R.A.

    1999-07-01

    The Eyeglass is a very large aperture (25–100-m) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope's large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently launchable (lightweight, packageable, and deployable) and it virtually eliminates the traditional, very tight surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope's eyepiece; the Eyeglass can provide diffraction-limited imaging with either single-band (Δλ/λ ≈ 0.1), multiband, or continuous spectral coverage. © 1999 Optical Society of America
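The zone geometry of such a diffractive primary follows the standard paraxial Fresnel-zone relation r_n = sqrt(n·λ·f). A small sketch (illustrative numbers only, not the Eyeglass design parameters):

```python
import math

def fresnel_zone_radius(n, wavelength, focal_length):
    """Boundary radius of the n-th Fresnel zone, r_n = sqrt(n * λ * f)
    (paraxial approximation, valid when r_n << f)."""
    return math.sqrt(n * wavelength * focal_length)

def zone_count(aperture_radius, wavelength, focal_length):
    """Number of full zones inside the aperture; the small epsilon
    guards against floating-point roundoff at exact zone boundaries."""
    return int(aperture_radius**2 / (wavelength * focal_length) + 1e-9)
```

The zone count grows with aperture area and shrinks with focal length, which is one reason a kilometers-long focal separation keeps the outermost zone features of a very large primary manufacturable.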

  9. Active multi-aperture imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Miller, Nicholas J.; Widiker, Jeffrey J.; McManamon, Paul F.; Haus, Joseph W.

    2012-06-01

    We describe our Innovative Multi Aperture Gimbaless Electro-Optical (IMAGE) testbed which uses coherent detection of the complex field reflected off a diffuse target with seven hexagonally arranged apertures. The seven measured optical fields are then phased with a digital optimization algorithm to synthesize a composite image whose angular resolution exceeds that of a single aperture. This same post-detection phasing algorithm also corrects aberrations induced by imperfect optics and a turbulent atmospheric path. We present the coherent imaging sub-aperture design used in the IMAGE array as well as the design of a compact range used to perform scaled tests of the IMAGE array. We present some experimental results of imaging diffuse targets in the compact range with two phase screens which simulate a ~7 km propagation path through distributed atmospheric turbulence.

  10. Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, Roderick Allen

    1998-04-20

    A very large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass ''aiming'' at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets.

  11. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  12. Large aperture ac interferometer for optical testing.

    PubMed

    Moore, D T; Murray, R; Neves, F B

    1978-12-15

    A 20-cm clear aperture modified Twyman-Green interferometer is described. The system measures phase with an AC technique called phase-lock interferometry while scanning the aperture with a dual galvanometer scanning system. Position information and phase are stored in a minicomputer with disk storage. This information is manipulated with associated software, and the wavefront deformation due to a test component is graphically displayed in perspective and contour on a CRT terminal.

  13. Aperture Engineering for Impulse Radiating Antennas

    NASA Astrophysics Data System (ADS)

    Tyo, J. S.

    The past several years have seen the development of an improved understanding of the role of aperture design for impulse radiating antennas (IRAs). The understanding began with the emergence of the concept of prompt aperture efficiency for ultra-wideband (UWB) antennas. This emergence allowed us to concentrate on ways to shape the aperture and control the field distribution within the aperture in order to maximize the prompt response from IRAs. In many high voltage UWB applications it is impossible to increase the radiated fields by increasing the source power. This is because in such instances the sources are already at the limits of linear electromagnetics. In these cases, we would like to come up with methods to improve the radiated field without altering the input impedance of the IRA. In this paper we will explore several such methods including the position of the feed arms to maximize field uniformity, the shaping of the aperture to increase radiated fields by reducing the aperture size, the relative sizing of the reflector (or lens) and the feed horn, and actually reorienting the currents on the reflector by controlling the direction of current flow. One common thread appears in all of these studies, that is the influence of Dr. Carl Baum on the direction and development of the work.

  14. Application of a geocentrifuge and stereolithographically fabricated apertures to multiphase flow in complex fracture apertures.

    SciTech Connect

    Glenn E. McCreery; Robert D. Stedtfeld; Alan T. Stadler; Daphne L. Stoner; Paul Meakin

    2005-09-01

    A geotechnical centrifuge was used to investigate unsaturated multiphase fluid flow in synthetic fracture apertures under a variety of flow conditions. The geocentrifuge subjected the fluids to centrifugal forces allowing the Bond number to be systematically changed without adjusting the fracture aperture or the fluids. The fracture models were based on the concept that surfaces generated by the fracture of brittle geomaterials have a self-affine fractal geometry. The synthetic fracture surfaces were fabricated from a transparent epoxy photopolymer using stereolithography, and fluid flow through the transparent fracture models was monitored by an optical image acquisition system. Aperture widths were chosen to be representative of the wide range of geological fractures in the vesicular basalt that lies beneath the Idaho National Laboratory (INL). Transitions between different flow regimes were observed as the acceleration was changed under constant flow conditions. The experiments showed the transition between straight and meandering rivulets in smooth walled apertures (aperture width = 0.508 mm), the dependence of the rivulet width on acceleration in rough walled fracture apertures (average aperture width = 0.25 mm), unstable meandering flow in rough walled apertures at high acceleration (20g) and the narrowing of the wetted region with increasing acceleration during the penetration of water into an aperture filled with wetted particles (0.875 mm diameter glass spheres).
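The role of the centrifuge can be made concrete through the Bond number, the ratio of body forces to capillary forces. A minimal sketch, with illustrative water-air property values (not the study's measured parameters):

```python
def bond_number(delta_rho, accel, length, surface_tension):
    """Bond number Bo = Δρ · a · L² / σ. In a geocentrifuge the
    effective acceleration a = N·g scales Bo by N while the aperture
    width L and the fluids stay fixed."""
    return delta_rho * accel * length**2 / surface_tension
```

Spinning the same fracture model at 20g therefore multiplies Bo by 20, which is exactly the knob the experiments above turn to move between flow regimes.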

  15. Apparent apertures from ground penetrating radar data and their relation to heterogeneous aperture fields

    NASA Astrophysics Data System (ADS)

    Shakas, A.; Linde, N.

    2017-03-01

    Considering fractures with heterogeneous aperture distributions, we explore the reliability of constant-aperture estimates derived from ground penetrating radar (GPR) reflection data. We generate geostatistical fracture aperture realizations that are characterized by the same mean-aperture and variance, but different Hurst exponents and cutoff lengths. For each of the 16 classes of heterogeneity considered, we generate 1000 fracture realizations from which we compute GPR reflection data using our recent effective-dipole forward model. We then use each (noise-contaminated) dataset individually to invert for a single 'apparent' aperture, i.e., we assume that the fracture aperture is homogeneous. We find that the inferred 'apparent' apertures are only reliable when fracture heterogeneity is non-fractal (the Hurst exponent is close to 1) and the scale of the dominant aperture heterogeneities is larger than the first Fresnel zone. These results are a direct consequence of the non-linear character of the thin-bed reflection coefficients. As fracture heterogeneity is ubiquitous and often fractal, our results suggest that robust field-based inference of fracture aperture can only be achieved by accounting for the non-linear response of fracture heterogeneity on GPR data.

  16. How Do I Fit through That Gap? Navigation through Apertures in Adults with and without Developmental Coordination Disorder

    PubMed Central

    Wilmut, Kate; Du, Wenchong; Barnett, Anna L

    2015-01-01

    During everyday life we move around busy environments and encounter a range of obstacles, such as a narrow aperture forcing us to rotate our shoulders in order to pass through. In typically developing individuals the decision to rotate the shoulders is body scaled and this movement adaptation is temporally and spatially tailored to the size of the aperture. This is done effortlessly although it actually involves many complex skills. For individuals with Developmental Coordination Disorder (DCD) moving in a busy environment and negotiating obstacles presents a real challenge which can negatively impact on safety and participation in motor activities in everyday life. However, we have a limited understanding of the nature of the difficulties encountered. Therefore, this current study considered how adults with DCD make action judgements and movement adaptations while navigating apertures. Fifteen adults with DCD and 15 typically developing (TD) controls passed through a series of aperture sizes which were scaled to body size (0.9-2.1 times shoulder width). Spatial and temporal characteristics of movement were collected over the approach phase and while crossing the aperture. The decision to rotate the shoulders was not scaled in the same way for the two groups, with the adults with DCD showing a greater propensity to turn for larger apertures compared to the TD adults when body size alone was accounted for. However, when accounting for degree of lateral trunk movement and variability on the approach, we no longer saw differences between the two groups. In terms of the movement adaptations, the adults with DCD approached an aperture differently when a shoulder rotation was required and then adapted their movement sooner compared to their typical peers. These results point towards an adaptive strategy in adults with DCD which allows them to account for their movement difficulties and avoid collision. PMID:25874635

  17. Zinc selenide-based large aperture photo-controlled deformable mirror.

    PubMed

    Quintavalla, Martino; Bonora, Stefano; Natali, Dario; Bianco, Andrea

    2016-06-01

    Realization of large aperture deformable mirrors with a high density of actuators is important in many applications, and photo-controlled deformable mirrors (PCDMs) represent an innovation. Herein we show that PCDMs are scalable by realizing a 2-inch aperture device based on polycrystalline zinc selenide (ZnSe) as the photoconductive substrate and a thin polymeric reflective membrane. ZnSe is electrically characterized and analyzed through a model that we previously introduced. The PCDM is then optically tested, demonstrating its capabilities in adaptive optics.

  18. Directional synthetic aperture flow imaging.

    PubMed

    Jensen, Jørgen Arendt; Nikolov, Svetoslav Ivanov

    2004-09-01

    A method for flow estimation using synthetic aperture imaging and focusing along the flow direction is presented. The method can find the correct velocity magnitude for any flow angle, and full color flow images can be measured using only 32 to 128 pulse emissions. The approach uses spherical wave emissions with a number of defocused elements and a linear frequency-modulated pulse (chirp) to improve the signal-to-noise ratio. The received signals are dynamically focused along the flow direction and these signals are used in a cross-correlation estimator for finding the velocity magnitude. The flow angle is manually determined from the B-mode image. The approach can be used for both tissue and blood velocity determination. The approach was investigated using both simulations and a flow system with a laminar flow. The flow profile was measured with a commercial 7.5 MHz linear array transducer. A plastic tube with an internal diameter of 17 mm was used with an EcoWatt 1 pump generating a laminar, stationary flow. The velocity profile was measured for flow angles of 90 and 60 degrees. The RASMUS research scanner was used for acquiring radio frequency (RF) data from 128 elements of the array, using 8 emissions with 11 elements in each emission. A 20-μs chirp was used during emission. The RF data were subsequently beamformed off-line and stationary echo canceling was performed. The 60-degree flow with a peak velocity of 0.15 m/s was determined using 16 groups of 8 emissions, and the relative standard deviation was 0.36% (0.65 mm/s). Using the same setup for purely transverse flow gave a standard deviation of 1.2% (2.1 mm/s). Variation of the different parameters revealed the sensitivity to the number of lines, angle deviations, length of the correlation interval, and sampling interval. An in vivo image of the carotid artery and jugular vein of a healthy 29-year-old volunteer was acquired. A full color flow image using only 128 emissions could be made with a high
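
    The cross-correlation estimation step described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the principle only (directional beamforming, chirp compression, and echo canceling are omitted); `xcorr_velocity` and all parameter values are illustrative, not taken from the paper.

    ```python
    import numpy as np

    def xcorr_velocity(line1, line2, dx, t_prf):
        """Estimate velocity from the shift between two directionally
        focused signal lines acquired t_prf seconds apart; dx is the
        spatial sampling interval along the flow direction (m)."""
        n = len(line1)
        xc = np.correlate(line2, line1, mode="full")
        lag = np.argmax(xc) - (n - 1)   # lag of the correlation peak
        return lag * dx / t_prf

    # synthetic example: a scatterer pattern displaced by 3 samples
    rng = np.random.default_rng(0)
    pattern = rng.standard_normal(256)
    line_a = pattern
    line_b = np.roll(pattern, 3)
    dx, t_prf = 0.1e-3, 1e-3            # 0.1 mm sampling, 1 kHz PRF
    v = xcorr_velocity(line_a, line_b, dx, t_prf)  # ≈ 0.3 m/s
    ```

    In the paper the two focused lines come from consecutive emission groups, and the estimate is averaged over many emissions to reduce the standard deviation.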

  19. CXTFIT/Excel A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    SciTech Connect

    Tang, Guoping; Mayes, Melanie; Parker, Jack C; Jardine, Philip M

    2010-01-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
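
    The estimation core of such a tool can be sketched as follows. This is a minimal, hypothetical illustration of weighted nonlinear least squares applied to the one-dimensional equilibrium convection-dispersion equation (step input, leading solution term only), using SciPy in place of the VBA macros; function names and parameter values are illustrative, not from CXTFIT/Excel.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.special import erfc

    def cde_step(x, t, v, D):
        """Leading term of the analytical solution of the 1-D equilibrium
        convection-dispersion equation for a step input at x = 0."""
        return 0.5 * erfc((x - v * t) / (2.0 * np.sqrt(D * t)))

    # synthetic breakthrough curve observed at x = 10 cm
    x, t = 10.0, np.linspace(0.5, 40, 80)
    v_true, D_true = 1.2, 0.8            # cm/h, cm^2/h
    obs = cde_step(x, t, v_true, D_true) \
        + 0.01 * np.random.default_rng(1).standard_normal(t.size)
    sigma = 0.01                          # observation error -> weights

    def residuals(p):
        v, logD = p                       # estimate D on a log scale,
        return (cde_step(x, t, v, np.exp(logD)) - obs) / sigma
                                          # a simple parameter transform

    fit = least_squares(residuals, x0=[1.0, 0.0])
    v_hat, D_hat = fit.x[0], np.exp(fit.x[1])
    ```

    The log transform on D mirrors the kind of parameter transformation the abstract mentions; prior information would enter as extra penalty residuals appended to the weighted vector.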

  20. CXTFIT/Excel-A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.

    2010-09-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.

  1. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one that contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques aimed at accelerating the inter prediction process have been proposed in the literature over the last few years, but no works focus on bidirectional or hierarchical prediction. In this article, with the emergence of many-core processors and accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate-distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.
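
    The kernel that such GPU implementations accelerate is, at its core, block matching. The sketch below is a minimal, hypothetical full-search SAD matcher for one fixed-size block; it illustrates the principle only and is far simpler than the variable block-size, bidirectional, hierarchical prediction of H.264/AVC.

    ```python
    import numpy as np

    def full_search_sad(cur_block, ref, top, left, radius):
        """Exhaustive block matching: return the motion vector (dy, dx)
        minimizing the sum of absolute differences within +/- radius."""
        h, w = cur_block.shape
        best = (None, np.inf)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                    continue            # candidate falls outside the frame
                sad = np.abs(cur_block.astype(int)
                             - ref[y:y+h, x:x+w].astype(int)).sum()
                if sad < best[1]:
                    best = ((dy, dx), sad)
        return best

    # reference frame and an 8x8 block whose content moved by (2, -1):
    # the block sits at (16, 16) now but its pixels came from (18, 15)
    rng = np.random.default_rng(2)
    ref = rng.integers(0, 256, size=(64, 64))
    cur = ref[18:26, 15:23]
    mv, sad = full_search_sad(cur, ref, 16, 16, 4)   # mv == (2, -1), sad == 0
    ```

    Real encoders replace the exhaustive loop with fast search patterns, and the GPU work in the article parallelizes exactly this per-candidate SAD evaluation.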

  2. Task 3: PNNL Visit by JAEA Researchers to Participate in TODAM Code Applications to Fukushima Rivers and to Evaluate the Feasibility of Adaptation of FLESCOT Code to Simulate Radionuclide Transport in the Pacific Ocean Coastal Water Around Fukushima

    SciTech Connect

    Onishi, Yasuo

    2013-03-29

    Four JAEA researchers visited PNNL for two weeks in February 2013 to learn the PNNL-developed, unsteady, one-dimensional river model, TODAM, and the PNNL-developed, time-dependent, three-dimensional coastal water model, FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and transport-deposition-resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate • 134Cs and 137Cs migration in Fukushima rivers and the coastal water, and • their accumulation in the river and ocean bed along the Fukushima coast. Forecasting the future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish, and other aquatic biota. PNNL presented the following during the JAEA visit: • TODAM and FLESCOT theories and mathematical formulations • TODAM and FLESCOT model structures • past TODAM and FLESCOT applications • a demonstration of the two codes' capabilities by applying them to simple hypothetical river and coastal water cases • the initial application of TODAM to the Ukedo River in Fukushima and JAEA researchers' participation in its modeling. PNNL also presented topics relevant to Fukushima environmental assessment and remediation, including • PNNL molecular modeling and EMSL computer facilities • cesium adsorption/desorption characteristics • experiences connecting molecular science research results to macro-scale model applications in the environment • an EMSL tour • a Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.

  3. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times. The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or

  4. Ethical coding.

    PubMed

    Resnik, Barry I

    2009-01-01

    It is ethical, legal, and proper for a dermatologist to maximize income through proper coding of patient encounters and procedures. The overzealous physician can misinterpret reimbursement requirements or receive bad advice from other physicians and cross the line from aggressive coding to coding fraud. Several of the more common problem areas are discussed.

  5. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a higher-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies that demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  6. Aperture effects in squid jet propulsion.

    PubMed

    Staaf, Danna J; Gilly, William F; Denny, Mark W

    2014-05-01

    Squid are the largest jet propellers in nature as adults, but as paralarvae they are some of the smallest, faced with the inherent inefficiency of jet propulsion at a low Reynolds number. In this study we describe the behavior and kinematics of locomotion in 1 mm paralarvae of Dosidicus gigas, the smallest squid yet studied. They swim with hop-and-sink behavior and can engage in fast jets by reducing the size of the mantle aperture during the contraction phase of a jetting cycle. We go on to explore the general effects of a variable mantle and funnel aperture in a theoretical model of jet propulsion scaled from the smallest (1 mm mantle length) to the largest (3 m) squid. Aperture reduction during mantle contraction increases propulsive efficiency at all squid sizes, although 1 mm squid still suffer from low efficiency (20%) because of a limited speed of contraction. Efficiency increases to a peak of 40% for 1 cm squid, then slowly declines. Squid larger than 6 cm must either reduce contraction speed or increase aperture size to maintain stress within maximal muscle tolerance. Ecological pressure to maintain maximum velocity may lead them to increase aperture size, which reduces efficiency. This effect might be ameliorated by nonaxial flow during the refill phase of the cycle. Our model's predictions highlight areas for future empirical work, and emphasize the existence of complex behavioral options for maximizing efficiency at both very small and large sizes.

  7. High numerical aperture focusing of singular beams

    NASA Astrophysics Data System (ADS)

    Normatov, Alexander; Spektor, Boris; Shamir, Joseph

    2009-02-01

    Rigorous vector analysis of high numerical aperture optical systems encounters severe difficulties. While existing analytic methods, based on the Richards-Wolf approach, allow focusing of nearly planar incident wavefronts, these methods break down for beams possessing considerable phase jumps, such as beams containing phase singularities. This work was motivated by the need to analyze a recently introduced metrological application of singular beams that demonstrated an experimental sensitivity of 20 nm under a moderate numerical aperture of 0.4. One possibility for obtaining even better sensitivity is to increase the numerical aperture of the optical system. In this work we address the issue of high numerical aperture focusing of the involved singular beams. Our solution exploits the superposition principle to evaluate the three-dimensional focal distribution of the electromagnetic field, provided the illuminating wavefront can be described as having a piecewise quasi-constant phase. A brief overview of singular beam microscopy is followed by a deeper discussion of the high numerical aperture focusing issue. Further, a few examples of different singular beam focal field distributions are presented.

  8. Bar-Code-Scribing Tool

    NASA Technical Reports Server (NTRS)

    Badinger, Michael A.; Drouant, George J.

    1991-01-01

    Proposed hand-held tool applies indelible bar code to small parts. Possible to identify parts for management of inventory without tags or labels. Microprocessor supplies bar-code data to impact-printer-like device. Device drives replaceable scribe, which cuts bar code on surface of part. Used to mark serially controlled parts for military and aerospace equipment. Also adapts for discrete marking of bulk items used in food and pharmaceutical processing.

  9. Solar energy apparatus with apertured shield

    NASA Technical Reports Server (NTRS)

    Collings, Roger J. (Inventor); Bannon, David G. (Inventor)

    1989-01-01

    A protective apertured shield for use about an inlet to a solar apparatus which includes a cavity receiver for absorbing concentrated solar energy. A rigid support truss assembly is fixed to the periphery of the inlet and projects radially inwardly therefrom to define a generally central aperture area through which solar radiation can pass into the cavity receiver. A non-structural, laminated blanket is spread over the rigid support truss in such a manner as to define an outer surface area and an inner surface area diverging radially outwardly from the central aperture area toward the periphery of the inlet. The outer surface area faces away from the inlet and the inner surface area faces toward the cavity receiver. The laminated blanket includes at least one layer of material, such as ceramic fiber fabric, having high infra-red emittance and low solar absorption properties, and another layer, such as metallic foil, of low infra-red emittance properties.

  10. A new active method to correct for the effects of complex apertures on coronagraph performance

    NASA Astrophysics Data System (ADS)

    Mazoyer, Johan; Pueyo, Laurent; N'Diaye, Mamadou; Fogarty, Kevin; Perrin, Marshall D.; Soummer, Remi; Norman, Colin Arthur

    2017-01-01

    The increasing complexity of the aperture geometry of future space (WFIRST, LUVOIR) and ground-based (E-ELT, TMT) telescopes will limit the performance of the next generation of coronagraphic instruments for high contrast imaging of exoplanets. We propose here a new closed-loop optimization technique that uses the deformable mirrors to correct for the effects of complex apertures on coronagraph performance. This method is a new alternative to the ACAD technique previously developed by our group. It allows the use of any coronagraph designed for continuous apertures with complex, segmented apertures, maintaining high performance in contrast and throughput. Furthermore, this closed-loop technique allows flexibility to adapt to changing pupil geometries (e.g., in case of segment failure or maintenance for ground-based telescopes) or manufacturing imperfections in the coronagraph assembly and alignment. We present a numerical study on several pupil geometries (segmented LUVOIR-type aperture, WFIRST, ELTs) for which we obtained high contrast levels with several deformable mirror setups (size, number of actuators, separation between them), coronagraphs (apodized pupil Lyot and vortex coronagraphs), and spectral bandwidths. Finally, using the results of this study, we present recommendations for future coronagraphic instruments.

  11. Novel multi-aperture approach for miniaturized imaging systems

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Brückner, A.; Oberdörster, A.; Reimann, A.

    2016-03-01

    The vast majority of cameras and imaging sensors rely on the same single-aperture optics principle, with the human eye as natural antetype. Multi-aperture approaches - in natural systems so-called compound eyes, in technology often referred to as array cameras - have advantages in terms of miniaturization, simplicity of the optics, and additional features such as depth information and refocusing enabled by computational manipulation of the system's raw image data. The proposed imaging principle is based on a multitude of imaging channels transmitting different parts of the entire field of view. Adapted image processing algorithms are employed for the generation of the overall image by stitching the images of the different channels. The restriction of each individual channel's field of view leads to a less complex optical system, targeting reduced fabrication cost. Due to a novel, linear morphology of the array camera setup, depth mapping with improved resolution can be achieved. We introduce a novel concept for miniaturized array cameras with several-megapixel resolution targeting high-volume applications in mobile and automotive imaging with improved depth mapping, and explain design and fabrication aspects.

  12. Distributed Apertures in Laminar Flow Laser Turrets.

    DTIC Science & Technology

    1981-09-01

    Keywords: High Energy Lasers; Aperture Arrays; Laser Turrets; Aero-Optics. (The remainder of the scanned abstract is illegible OCR.)

  13. PDII- Additional discussion of the dynamic aperture

    SciTech Connect

    Norman M. Gelfand

    2002-07-23

    This note is in the nature of an addition to the dynamic aperture calculations found in the report on the Proton Driver, FERMILAB-TM-2169. An extensive discussion of the Proton Driver lattice, as well as the nomenclature used to describe it, can be found in TM-2169. Basically, the proposed lattice is a racetrack design with the two arcs joined by two long straight sections. The straight sections are dispersion free. Tracking studies were undertaken with the objective of computing the dynamic aperture for the lattice, and some of the results have been incorporated into TM-2169. This note is a more extensive report of those calculations.

  14. Synthetic aperture radar capabilities in development

    SciTech Connect

    Miller, M.

    1994-11-15

    The Imaging and Detection Program (IDP) within the Laser Program is currently developing an X-band Synthetic Aperture Radar (SAR) to support the Joint US/UK Radar Ocean Imaging Program. The radar system will be mounted in the program's Airborne Experimental Test-Bed (AETB), where the initial mission is to image ocean surfaces and better understand the physics of low grazing angle backscatter. The presentation will discuss the SAR's overall functionality and briefly cover the AETB's capabilities. Vital subsystems including radar, computer, navigation, antenna stabilization, and SAR focusing algorithms will be examined in more detail.

  15. Comparison of calculated with measured dynamic aperture

    SciTech Connect

    Zimmermann, F.

    1994-06-01

    The measured dynamic aperture of the HERA proton ring and the value expected from simulation studies agree within a factor of 2. A better agreement is achieved if a realistic tune modulation is included in the simulation. The approximate threshold of tune-modulation induced diffusion can be calculated analytically. Its value is in remarkable agreement with the dynamic aperture measured. The calculation is based on parameters of resonances through order 11 which are computed using differential-algebra methods and normal-form algorithms. Modulational diffusion in conjunction with drifting machine parameters appears to be the most important transverse diffusion process.

  16. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when computing power permits. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast, parallel tracking code is very helpful. In this presentation, we describe an implementation of the storage-ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations with promising performance.
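
    As a toy illustration of direct-tracking DA estimation (not the TESLA code itself), the sketch below scans amplitude in the 2-D Hénon map, a standard model of a linear lattice with a single sextupole-like kick; the phase advance, turn count, and scan parameters are arbitrary choices for the example.

    ```python
    import math

    def survives(x0, mu, n_turns=1000, bound=10.0):
        """Track one particle through the Henon map: a linear rotation
        by phase advance mu followed by a thin nonlinear kick p -> p + x^2.
        The particle 'survives' if it stays inside the bound for all turns."""
        x, p = x0, 0.0
        c, s = math.cos(mu), math.sin(mu)
        for _ in range(n_turns):
            x, p = c * x + s * p, -s * x + c * p   # linear rotation
            p += x * x                              # sextupole-like kick
            if x * x + p * p > bound * bound:
                return False
        return True

    def dynamic_aperture(mu, a_max=2.0, step=0.01):
        """Scan the launch amplitude upward; the DA estimate is the
        largest amplitude that still survives the full tracking."""
        a = step
        while a < a_max and survives(a, mu):
            a += step
        return a - step

    da = dynamic_aperture(mu=2 * math.pi * 0.205)
    ```

    A production code tracks in 6-D through the real lattice with errors, and parallelizes over the launch grid, which is exactly where MPI helps.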

  17. Sub-aperture stitching test of a cylindrical mirror with large aperture

    NASA Astrophysics Data System (ADS)

    Xue, Shuai; Chen, Shanyong; Shi, Feng; Lu, Jinfeng

    2016-09-01

    Cylindrical mirrors are key optics in high-end defense and scientific research equipment such as high energy laser weapons, synchrotron radiation systems, etc. However, surface error testing technology for cylindrical mirrors has developed slowly; as a result, their optical fabrication quality cannot meet requirements, hindering the development of the associated equipment. A computer-generated hologram (CGH) is commonly utilized as a null for testing cylindrical optics. However, since the fabrication of large-aperture CGHs is not yet mature, null testing of large-aperture cylindrical optics is limited by the aperture of the CGH. Hence a CGH null test combined with a sub-aperture stitching method is proposed to overcome the CGH aperture limit, and the design of CGHs for testing cylindrical surfaces is analyzed. Moreover, because of the particular shape of cylindrical surfaces, their misalignment aberrations differ from those of rotationally symmetric surfaces, and existing stitching algorithms for rotationally symmetric surfaces cannot meet the requirements of stitching cylindrical surfaces. We therefore analyze the misalignment aberrations of cylindrical surfaces and study a stitching algorithm for measuring large-aperture cylindrical optics. Finally, we test a cylindrical mirror with a large aperture to verify the validity of the proposed method.
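
    The stitching principle itself (not the cylindrical algorithm of the paper) can be illustrated in one dimension: solve for the relative piston and tilt between two overlapping sub-aperture measurements by least squares on the overlap, then join them. All profiles and coefficients below are invented for illustration.

    ```python
    import numpy as np

    # ground-truth figure error over the full aperture (arbitrary shape)
    x = np.linspace(-1.0, 1.0, 201)
    surface = 0.05 * x**3 - 0.02 * x

    # two overlapping sub-aperture measurements, each corrupted by its own
    # unknown piston and tilt (the misalignment terms the stitcher removes)
    i1, i2 = np.arange(0, 120), np.arange(80, 201)
    m1 = surface[i1] + 0.010 + 0.003 * x[i1]
    m2 = surface[i2] - 0.004 + 0.001 * x[i2]

    # least-squares fit of the piston/tilt difference on the overlap
    # (full-grid indices 80..119)
    x_ov = x[80:120]
    A = np.column_stack([np.ones_like(x_ov), x_ov])
    c, *_ = np.linalg.lstsq(A, m1[80:120] - m2[:40], rcond=None)

    # align m2 into m1's frame; the overlap residual drops to machine precision
    m2_aligned = m2 + c[0] + c[1] * x[i2]
    residual = np.max(np.abs(m1[80:120] - m2_aligned[:40]))
    ```

    A global piston and tilt (here m1's own misalignment) is unobservable from the overlap alone, which is why real stitching algorithms fix one reference sub-aperture; for cylinders the misalignment basis is larger, as the abstract notes.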

  18. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

    A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially-coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
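
    As a concrete (and deliberately tiny) example of a local component code, the sketch below performs hard-decision syndrome decoding of the (7,4) Hamming code; the soft MAP/Ashikhmin-Lytsin decoding used in the paper is not shown, and the code layout here is illustrative only.

    ```python
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code: column i holds the
    # 3-bit binary representation of i+1, so a nonzero syndrome directly
    # names the (1-indexed) position of a single bit error.
    H = np.array([[int(b) for b in format(i + 1, "03b")] for i in range(7)]).T

    def decode(r):
        """Correct at most one bit error in the received 7-bit word r."""
        s = H @ r % 2
        pos = int("".join(str(int(b)) for b in s), 2)   # 0 -> no error seen
        if pos:
            r = r.copy()
            r[pos - 1] ^= 1
        return r

    codeword = np.array([0, 0, 1, 0, 1, 1, 0])   # satisfies H @ c % 2 == 0
    received = codeword.copy()
    received[4] ^= 1                              # inject a single bit error
    corrected = decode(received)                  # recovers the codeword
    ```

    In a GLDPC code, many such local decoders run on overlapping subsets of the global codeword and exchange information iteratively.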

  19. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  20. Spatially variant apodization for squinted synthetic aperture radar images.

    PubMed

    Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo

    2007-08-01

    Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves sidelobe level while preserving resolution. The method implements a two-dimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focusing on stripmap synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.
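
    For reference, the classical Nyquist-rate SVA rule that the paper builds on (a per-sample raised-cosine weight chosen in [0, 0.5] to minimize the output magnitude) can be sketched in one dimension on a real-valued signal; the squinted-geometry modification proposed in the paper is not reproduced here.

    ```python
    import numpy as np

    def sva(x):
        """Spatially variant apodization at the Nyquist rate: for each
        sample, choose the raised-cosine weight a in [0, 0.5] that
        minimizes the output magnitude (real-valued version; I and Q
        are handled separately in complex SAR imagery)."""
        y = x.copy()
        for k in range(1, len(x) - 1):
            s = x[k - 1] + x[k + 1]
            if s == 0:
                continue
            a = -x[k] / s                  # unconstrained minimizer
            if a <= 0:
                y[k] = x[k]                # uniform weighting is best
            elif a >= 0.5:
                y[k] = x[k] + 0.5 * s      # clamp to Hanning weighting
            else:
                y[k] = 0.0                 # interior minimum: exact null
        return y

    # point-target response sampled at the Nyquist rate with a fractional
    # offset so the sidelobes are visible; SVA nulls them, keeps the peak
    n = np.arange(-20, 21)
    x = np.sinc(n - 0.3)
    y = sva(x)
    ```

    The squint problem arises because this per-sample rule assumes the sidelobes lie along the sampling axes, which no longer holds for a squinted collection geometry.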

  1. Interferometric Synthetic Aperture Microwave Radiometers : an Overview

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; McKague, Darren

    2011-01-01

    This paper describes 1) the progress of the work of the IEEE Geoscience and Remote Sensing Society (GRSS) Instrumentation and Future Technologies Technical Committee (IFT-TC) Microwave Radiometer Working Group and 2) an overview of the development of interferometric synthetic aperture microwave radiometers as an introduction to a dedicated session.

  2. Vowel Aperture and Syllable Segmentation in French

    ERIC Educational Resources Information Center

    Goslin, Jeremy; Frauenfelder, Ulrich H.

    2008-01-01

    The theories of Pulgram (1970) suggest that if the vowel of a French syllable is open then it will induce syllable segmentation responses that result in the syllable being closed, and vice versa. After the empirical verification that our target French-speaking population was capable of distinguishing between mid-vowel aperture, we examined the…

  3. Radiation safety considerations in proton aperture disposal.

    PubMed

    Walker, Priscilla K; Edwards, Andrew C; Das, Indra J; Johnstone, Peter A S

    2014-04-01

    Beam shaping in scattered and uniform scanned proton beam therapy (PBT) is commonly done with brass apertures. Due to proton interactions, these devices become radioactive and could pose safety issues and radiation hazards. Nearly 2,000 patient-specific devices per year are used at Indiana University Cyclotron Operations (IUCO) and IU Health Proton Therapy Center (IUHPTC); these devices require proper guidelines for disposal. IUCO practice has been to store these apertures for at least 4 mo to allow for safe transfer to recycling contractors. The devices require decay in two staged secure locations, including at least 4 mo in a separate building, at which point half are ready for disposal. At 6 mo, 20-30% of apertures require further storage. This process requires significant space and manpower and should be considered in the design process for new clinical facilities. More widespread adoption of pencil beam or spot scanning nozzles may obviate this issue, as apertures will then no longer be necessary.

  4. Demonstration of synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Buell, W.; Marechal, N.; Buck, J.; Dickinson, R.; Kozlowski, D.; Wright, T.; Beck, S.

    2005-05-01

    The spatial resolution of a conventional imaging LADAR system is constrained by the diffraction limit of the telescope aperture. The purpose of this work is to investigate Synthetic Aperture Imaging LADAR (SAIL), which employs aperture synthesis with coherent laser radar to overcome the diffraction limit and achieve fine-resolution, long range, two-dimensional imaging with modest aperture diameters. This paper details our laboratory-scale SAIL testbed, digital signal processing techniques, and image results. A number of fine-resolution, well-focused SAIL images are shown including both retro-reflecting and diffuse scattering targets. A general digital signal processing solution to the laser waveform instability problem is described and demonstrated, involving both new algorithms and hardware elements. These algorithms are primarily data-driven, without a priori knowledge of waveform and sensor position, representing a crucial step in developing a robust imaging system. These techniques perform well on waveform errors, but not on external phase errors such as turbulence or vibration. As a first step towards mitigating phase errors of this type, we have developed a balanced, quadrature phase, laser vibrometer to work in conjunction with our SAIL system to measure and compensate for relative line of sight motion between the target and transceiver. We describe this system and present a comparison of the vibrometer-measured phase error with the phase error inferred from the SAIL data.
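
    The diffraction limit the abstract refers to, and the gain from aperture synthesis, can be made concrete with back-of-the-envelope numbers (illustrative values, not the testbed's actual parameters):

```python
# Real-aperture cross-range resolution is set by diffraction: rho ~ lambda*R/D.
# Aperture synthesis effectively replaces D with twice the synthesized length L.
wavelength = 1.5e-6   # m, a typical SAIL laser wavelength (assumed)
R = 10e3              # m, standoff range (assumed)
D = 0.05              # m, real telescope aperture diameter (assumed)
L = 1.0               # m, synthesized aperture length (assumed)

rho_real = wavelength * R / D        # 0.30 m with a fixed 5 cm telescope
rho_sail = wavelength * R / (2 * L)  # 7.5 mm after synthesizing 1 m of aperture
```

    A 1 m synthesized aperture thus improves cross-range resolution by a factor of 40 over the 5 cm telescope alone, which is the motivation for SAIL's "modest aperture diameters."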

  5. Processing for spaceborne synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Lybanon, M.

    1973-01-01

    The data handling and processing involved in using synthetic aperture radar as a satellite-borne earth resources remote sensor are considered. The discussion covers the nature of the problem, the theory, both conventional and potential advanced processing techniques, and a complete computer simulation. It is shown that digital processing is a real possibility, and some future directions for research are suggested.

  6. Dynamic aperture of the recycler ring

    SciTech Connect

    Xiao, Meiqin; Sen, Tanaji

    2000-11-15

    This report describes the dynamic aperture tracking for the Recycler Ring based on the latest modified lattice Ver 20. The purpose of the calculation is to check the optical properties of the lattice with the replacement of the high beta straight section (HB30) by a low beta straight section (LB30) for stochastic cooling.

  7. RF Performance of Membrane Aperture Shells

    NASA Technical Reports Server (NTRS)

    Flint, Eric M.; Lindler, Jason E.; Thomas, David L.; Romanofsky, Robert

    2007-01-01

    This paper provides an overview of recent results establishing the suitability of Membrane Aperture Shell Technology (MAST) for Radio Frequency (RF) applications. These single-surface shells are capable of maintaining their figure with no preload or pressurization and minimal boundary support, yet can be compactly roll-stowed and passively self-deploy. As such, they are a promising technology for enabling a future generation of RF apertures. In this paper, we review recent experimental and numerical results quantifying suitable RF performance. It is shown that candidate materials possess metallic coatings with sufficiently low surface roughness and that these materials can be efficiently fabricated into RF-relevant doubly curved shapes. A numerical justification for using a reflectivity metric, as opposed to the more standard RF designer metric of skin depth, is presented, and the resulting ability to use relatively thin coating thicknesses is experimentally validated with material sample tests. The validity of these independent film sample measurements is then confirmed through experimental results measuring RF performance for reasonably sized doubly curved apertures. Currently available best results are 22 dBi gain at 3 GHz (S-Band) for a 0.5 m aperture tested in prime focus mode, 28 dBi gain for the same antenna in the C-Band (4 to 6 GHz), and 36.8 dBi for a smaller 0.25 m antenna tested at 32 GHz in the Ka-Band. RF range test results for a segmented aperture (one possible scaling approach) are shown as well. Measured antenna system efficiencies (relative to the unachievable ideal) for these on-axis tests are generally quite good, typically ranging from 50 to 90%.
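
    The skin-depth metric the paper argues against can be computed from the textbook formula. Assuming a bulk-aluminum coating (the material values below are assumptions, not the paper's), a micron-scale depth at S-Band follows:

```python
import math

rho_al = 2.65e-8          # ohm*m, bulk aluminum resistivity (assumed)
mu0 = 4 * math.pi * 1e-7  # H/m, vacuum permeability
f = 3e9                   # Hz, the S-Band test frequency
omega = 2 * math.pi * f

# Classical skin depth: delta = sqrt(2*rho / (omega * mu0))
delta = math.sqrt(2 * rho_al / (omega * mu0))  # ~1.5e-6 m at 3 GHz
```

    A skin-depth criterion would therefore demand coatings of a micron or more; the paper's reflectivity-based argument is what justifies the thinner coatings it validates experimentally.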

  8. Adaptive divergence in wine yeasts and their wild relatives suggests a prominent role for introgressions and rapid evolution at non-coding sites.

    PubMed

    Almeida, Pedro; Barbosa, Raquel; Bensasson, Douda; Gonçalves, Paula; Sampaio, José Paulo

    2017-02-23

    In Saccharomyces cerevisiae, the main yeast in wine fermentation, the opportunity to examine divergence at the molecular level between a domesticated lineage and its wild counterpart arose recently due to the identification of the closest relatives of wine strains, a wild population associated with Mediterranean oaks. Since genomic data are available for a considerable number of representatives belonging to both groups, we used population genomics to estimate the degree and distribution of nucleotide variation between wine yeasts and their closest wild relatives. We found widespread genome-wide divergence, particularly at non-coding sites, which, together with above-average divergence in trans-acting DNA binding proteins, may suggest an important role for divergence at the level of transcriptional regulation. Nine outlier regions putatively under strong divergent selection were highlighted by a genome-wide scan under stringent conditions. Several cases of introgressions originating in the sibling species S. paradoxus were also identified in the Mediterranean oak population. FFZ1 and SSU1, mostly known for conferring sulphite resistance in wine yeasts, were among the introgressed genes, although not fixed. Because the introgressions detected in our study are not found in wine strains, we hypothesise that ongoing divergent ecological selection segregates the two forms between the different niches. Together, our results provide a first insight into the extent and kind of divergence between wine yeasts and their closest wild relatives.

  9. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods are described for designing protograph-based bit-interleaved coded modulation; the approach is general and applies to any modulation. The coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
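
    The second (circulant) lifting stage can be sketched as replacing each base-matrix entry with a cyclically shifted Z x Z identity block. This is a generic illustration of circulant lifting, not the patented construction itself; the base matrix and shift values are invented for the example.

```python
import numpy as np

def circulant_lift(base, Z):
    """Expand a protograph base matrix into a binary parity-check matrix.

    base[i, j] = -1 marks an all-zero Z x Z block; a value s >= 0 becomes
    the Z x Z identity cyclically shifted by s columns (a circulant).
    """
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            if base[i, j] >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(I, base[i, j], axis=1)
    return H

# A toy 2 x 3 protograph lifted by Z = 4 gives an 8 x 12 parity-check matrix.
base = np.array([[0, 1, -1],
                 [2, 0, 1]])
H = circulant_lift(base, 4)
```

    Each non-negative base entry contributes exactly one 1 per row and column of its block, so the lifted code preserves the protograph's degree distribution, which is the point of the copy-and-permute construction.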

  10. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s, while requiring only 3 to 4 million multiplications and additions per second. It combines advantages of adaptive/predictive coding and of code-excited linear prediction, which yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  11. Fractal characteristics of fracture roughness and aperture data

    SciTech Connect

    Kumar, S.; Bodvarsson, G.S.; Boernge, J.

    1991-05-01

    In this study mathematical expressions are developed for the characteristics of apertures between rough surfaces. It is shown that the correlation between the opposite surfaces influences the aperture properties, and different models are presented for these different surface correlations. Fracture and aperture profiles measured from intact fractures are evaluated, and it is found that they qualitatively follow the mathematically predicted trends.

  12. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, W.F.

    1983-08-31

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.

  13. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, Walter F.

    1985-01-01

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.

  14. Vacuum aperture isolator for retroreflection from laser-irradiated target

    DOEpatents

    Benjamin, Robert F.; Mitchell, Kenneth B.

    1980-01-01

    The disclosure is directed to a vacuum aperture isolator for retroreflection of a laser-irradiated target. Within a vacuum chamber are disposed a beam focusing element, a disc having an aperture and a recollimating element. The edge of the focused beam impinges on the edge of the aperture to produce a plasma which refracts any retroreflected light from the laser's target.

  15. Sharing code

    PubMed Central

    Kubilius, Jonas

    2014-01-01

    Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks focus towards researchers. In comparison, OSF offers a one-stop solution for researchers but a lot of functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing. PMID:25165519

  16. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    PubMed

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture is provided. Using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture; the result is close to that obtained using the PSF of the actual aperture.

  17. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    SciTech Connect

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei

    2012-11-15

    Monte-Carlo simulation of neutron coded imaging based on an encoding aperture, for a Z-pinch with a large 5 mm radius field of view, has been investigated, and the coded image has been obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation because of its amplification of statistical fluctuations, has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding aperture cross section. The properties and essential causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture is provided. Using the equivalent radius, the reconstruction can also be accomplished without knowing the point spread function (PSF) of the actual aperture; the result is close to that obtained using the PSF of the actual aperture.

  18. Large aperture compound lenses made of lithium

    NASA Astrophysics Data System (ADS)

    Cremer, J. T.; Piestrup, M. A.; Beguiristain, H. R.; Gary, C. K.; Pantell, R. H.

    2003-04-01

    We have measured the intensity profile and transmission of x rays focused by a series of biconcave parabolic unit lenses fabricated in lithium. For a specified focal length and photon energy, lithium compound refractive lenses (CRLs) have larger transmission, aperture size, and gain than aluminum, kapton, and beryllium CRLs. The lithium compound refractive lens was composed of 335 biconcave, parabolic unit lenses, each with an on-axis radius of curvature of 0.95 mm. Two-dimensional focusing was obtained at 8.0 keV with a focal length of 95 cm. The effective aperture of the CRL was measured to be 1030 μm, with an on-axis (peak) transmission of 27% and an on-axis intensity gain of 18.9.
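
    The reported 95 cm focal length is consistent with the standard thin-lens formula for a compound refractive lens, f = R/(2Nδ). The check below derives lithium's refractive-index decrement δ at 8 keV from handbook values; the material constants are assumptions, not taken from the paper.

```python
import math

r_e = 2.818e-15           # m, classical electron radius
N_A = 6.022e23            # 1/mol, Avogadro's number
wavelength = 1.5498e-10   # m, photon wavelength at 8.0 keV

# Electron density of lithium: rho = 0.534 g/cm^3, Z = 3, A = 6.94 g/mol.
n_e = 0.534 * N_A * 3 / 6.94 * 1e6                 # electrons per m^3
delta = r_e * wavelength**2 * n_e / (2 * math.pi)  # index decrement, ~1.5e-6

R = 0.95e-3   # m, on-axis radius of curvature of each unit lens
N = 335       # number of biconcave parabolic unit lenses
f = R / (2 * N * delta)   # ~0.95 m, matching the reported 95 cm
```

    The small δ of low-Z lithium is exactly why so many unit lenses (N = 335) are stacked: each surface refracts only weakly, but absorption stays low, giving the large effective aperture the abstract reports.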

  19. Compact high precision adjustable beam defining aperture

    DOEpatents

    Morton, Simon A; Dickert, Jeffrey

    2013-07-02

    The present invention provides an adjustable aperture for limiting the dimension of a beam of energy. In an exemplary embodiment, the aperture includes (1) at least one piezoelectric bender, where a fixed end of the bender is attached to a common support structure via a first attachment and where a movable end of the bender is movable in response to an actuating voltage applied to the bender and (2) at least one blade attached to the movable end of the bender via a second attachment such that the blade is capable of impinging upon the beam. In an exemplary embodiment, the beam of energy is electromagnetic radiation. In an exemplary embodiment, the beam of energy is X-rays.

  20. Performance limits for Synthetic Aperture Radar.

    SciTech Connect

    Doerry, Armin Walter

    2006-02-01

    The performance of a Synthetic Aperture Radar (SAR) system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to "get your arms around" the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics, no matter how bright the engineer tasked to generate a system design. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall SAR system. For example, there are definite optimum frequency bands that depend on weather conditions and range, and the minimum radar PRF for a fixed real antenna aperture dimension is independent of frequency. While the information herein is not new to the literature, its collection into a single report should offer some value in reducing the "seek time".
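
    The frequency-independence of the minimum PRF quoted above follows from azimuth sampling alone: along-track samples must be spaced no more than half the real antenna's azimuth dimension D, so PRF_min = 2v/D regardless of wavelength. A one-line check with illustrative numbers:

```python
v = 100.0  # m/s, platform ground speed (illustrative)
D = 1.0    # m, real antenna azimuth aperture dimension (illustrative)

# Along-track sample spacing v/PRF must not exceed D/2 => PRF >= 2*v/D.
prf_min = 2 * v / D  # 200 Hz, with no dependence on radar frequency
```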

  1. Design of large aperture focal plane shutter

    NASA Astrophysics Data System (ADS)

    Hu, Jia-wen; Ma, Wen-li; Huang, Jin-long

    2012-09-01

    To satisfy the requirements of large telescopes, a large aperture focal plane shutter with an aperture size of φ200 mm was researched and designed. It can be started and stopped in a relatively short time with precise positioning, and its blades can open and close simultaneously at any orientation. Timing belts and stepper motors were adopted as the drive mechanism. The velocity and position of the stepper motors are controlled by PWM pulses generated by a DSP. An exponential velocity curve is applied to the stepper motors so that the shutter starts and stops in a short time. The opening/closing time of the shutter is 0.2 s, which meets the performance requirements of a large telescope.

  2. Polarization-sensitive interferometric synthetic aperture microscopy

    PubMed Central

    South, Fredrick A.; Liu, Yuan-Zhi; Xu, Yang; Shemonski, Nathan D.; Carney, P. Scott; Boppart, Stephen A.

    2015-01-01

    Three-dimensional optical microscopy suffers from the well-known compromise between transverse resolution and depth-of-field. This is true for both structural imaging methods and their functional extensions. Interferometric synthetic aperture microscopy (ISAM) is a solution to the 3D coherent microscopy inverse problem that provides depth-independent transverse resolution. We demonstrate the extension of ISAM to polarization sensitive imaging, termed polarization-sensitive interferometric synthetic aperture microscopy (PS-ISAM). This technique is the first functionalization of the ISAM method and provides improved depth-of-field for polarization-sensitive imaging. The basic assumptions of polarization-sensitive imaging are explored, and refocusing of birefringent structures is experimentally demonstrated. PS-ISAM enables high-resolution volumetric imaging of birefringent materials and tissue. PMID:26648593

  3. Aperture modulated, translating bed total body irradiation

    SciTech Connect

    Hussain, Amjad; Villarreal-Barajas, Jose Eduardo; Dunscombe, Peter; Brown, Derek W.

    2011-02-15

    Purpose: Total body irradiation (TBI) techniques aim to deliver a uniform radiation dose to a patient with an irregular body contour and a heterogeneous density distribution to within ±10% of the prescribed dose. In the current article, the authors present a novel, aperture modulated, translating bed TBI (AMTBI) technique that produces a high degree of dose uniformity throughout the entire patient. Methods: The radiation beam is dynamically shaped in two dimensions using a multileaf collimator (MLC). The irregular surface compensation algorithm in the Eclipse treatment planning system is used for fluence optimization, which is performed based on penetration depth and internal inhomogeneities. Two optimal fluence maps (AP and PA) are generated and beam apertures are created to deliver these optimal fluences. During treatment, the patient/phantom is translated on a motorized bed close to the floor (source to bed distance: 204.5 cm) under a stationary radiation beam with 0 deg. gantry angle. The bed motion and dynamic beam apertures are synchronized. Results: The AMTBI technique produces a more homogeneous dose distribution than fixed open beam translating bed TBI. In phantom studies, the dose deviation along the midline is reduced from 10% to less than 5% of the prescribed dose in the longitudinal direction. Dose to the lung is reduced by more than 15% compared to the unshielded fixed open beam technique. At the lateral body edges, the dose received from the open beam technique was 20% higher than that prescribed at umbilicus midplane. With AMTBI the dose deviation in this same region is reduced to less than 3% of the prescribed dose. Validation of the technique was performed using thermoluminescent dosimeters in a Rando phantom. Agreement between calculation and measurement was better than 3% in all cases. Conclusions: A novel, translating bed, aperture modulated TBI technique that employs dynamically shaped MLC-defined beams is shown to improve dose uniformity.

  4. Infrared interferometer with a scanned aperture.

    PubMed

    Edwin, R P

    1975-08-01

    A Twyman-Green interferometer operating at a 3.39-microm wavelength has been built in which the collimator aperture was scanned by a laser beam. The scanning was produced by reflecting the laser beam from a mirror supported by four piezoelectric elements and oscillated about two orthogonal axes. The radiation transmitted by the interferometer was measured by a stationary detector of small area. The complete system offers a cheap and efficient alternative to conventional ir interferometers.

  5. Synthetic aperture ladar concept for infrastructure monitoring

    NASA Astrophysics Data System (ADS)

    Turbide, Simon; Marchese, Linda; Terroux, Marc; Bergeron, Alain

    2014-10-01

    Long range surveillance of infrastructure is a critical need in numerous security applications, both civilian and military. Synthetic aperture radar (SAR) continues to provide high resolution radar images in all weather conditions from remote distances. As well, interferometric SAR (InSAR) and differential interferometric SAR (D-InSAR) have become powerful tools, adding high resolution elevation and change detection measurements. State-of-the-art SAR systems based on dual-use satellites are capable of providing ground resolutions of one meter, while their airborne counterparts obtain resolutions of 10 cm. D-InSAR products based on these systems could produce cm-scale vertical resolution image products. Deformation monitoring of railways, roads, buildings, cellular antennas, and power structures (i.e., power lines, wind turbines, dams, or nuclear plants) would benefit from improved resolution, both in the ground plane and in the vertical direction. The ultimate limitation to the achievable resolution of any imaging system is its wavelength, and state-of-the-art SAR systems are approaching this limit. The natural extension to improve resolution is thus to decrease the wavelength, i.e., to design a synthetic aperture system in a different wavelength regime. One such system offering the potential for vastly improved resolution is Synthetic Aperture Ladar (SAL), which operates at infrared wavelengths, ten thousand times smaller than radar wavelengths. This paper presents a scaled-down laboratory demonstration of infrastructure deformation monitoring with an interferometric synthetic aperture ladar (IFSAL) system operating at 1.5 μm. Results show sub-millimeter precision on the deformation applied to the target.

  6. Performance Limits for Synthetic Aperture Radar

    DTIC Science & Technology

    2006-02-01

    Armin W. Doerry, SAR Applications Department, Sandia National Laboratories, PO Box 5800, Albuquerque, NM 87185-1330 (second edition). The performance of a Synthetic Aperture Radar (SAR) system depends on a variety of factors, many of which are interdependent in some manner.

  7. Substrate effect on aperture resonances in a thin metal film.

    PubMed

    Kang, J H; Choe, Jong-Ho; Kim, D S; Park, Q-Han

    2009-08-31

    We present a simple theoretical model to study the effect of a substrate on the resonance of an aperture in a thin metal film. The transmitted energy through an aperture is shown to be governed by the coupling of aperture waveguide mode to the incoming and the outgoing electromagnetic waves into the substrate region. Aperture resonance in the energy transmission thus depends critically on the refractive index of a substrate. We explain the substrate effect on aperture resonance in terms of destructive interference among evanescent modes or impedance mismatch. Our model shows an excellent agreement with a rigorous FDTD calculation and is consistent with previous experimental observations.

  8. Synthetic aperture radar processing with tiered subapertures

    SciTech Connect

    Doerry, A.W.

    1994-06-01

    Synthetic Aperture Radar (SAR) is used to form images that are maps of radar reflectivity of some scene of interest, from range soundings taken over some spatial aperture. Additionally, the range soundings are typically synthesized from a sampled frequency aperture. Efficient processing of the collected data necessitates using efficient digital signal processing techniques such as vector multiplies and fast implementations of the Discrete Fourier Transform. Inherent in image formation algorithms that use these is a trade-off between the size of the scene that can be acceptably imaged, and the resolution with which the image can be made. These limits arise from migration errors and spatially variant phase errors, and different algorithms mitigate these to varying degrees. Two fairly successful algorithms for airborne SARs are Polar Format processing, and Overlapped Subaperture (OSA) processing. This report introduces and summarizes the analysis of generalized Tiered Subaperture (TSA) techniques that are a superset of both Polar Format processing and OSA processing. It is shown how tiers of subapertures in both azimuth and range can effectively mitigate both migration errors and spatially variant phase errors to allow virtually arbitrary scene sizes, even in a dynamic motion environment.

  9. Diffraction contrast imaging using virtual apertures.

    PubMed

    Gammer, Christoph; Burak Ozdol, V; Liebscher, Christian H; Minor, Andrew M

    2015-08-01

    Two methods for obtaining the full diffraction information from a sample region, and the associated reconstruction of images or diffraction patterns using virtual apertures, are demonstrated. In a STEM-based approach, diffraction patterns are recorded for each beam position using a small probe convergence angle. Similarly, a tilt series of TEM dark-field images is acquired. The resulting datasets allow the reconstruction of either electron diffraction patterns, or bright-, dark- or annular dark-field images using virtual apertures. The experimental procedures of both methods are presented in the paper and are applied to a precipitation-strengthened and creep-deformed ferritic alloy with a complex microstructure. The reconstructed virtual images are compared with conventional TEM images. The major advantage is that arbitrarily shaped virtual apertures generated with image processing software can be designed without facing any physical limitations. In addition, any virtual detector that is specifically designed according to the underlying crystal structure can be created to optimize image contrast.

  10. Restoring Aperture Profile At Sample Plane

    SciTech Connect

    Jackson, J L; Hackel, R P; Lungershausen, A W

    2003-08-03

    Off-line conditioning of full-size optics for the National Ignition Facility required a beam delivery system to allow conditioning lasers to rapidly raster scan samples while achieving several technical goals. The main purpose of the optical system designed was to reconstruct at the sample plane the flat beam profile found at the laser aperture with significant reductions in beam wander to improve scan times. Another design goal was the ability to vary the beam size at the sample to scan at different fluences while utilizing all of the laser power and minimizing processing time. An optical solution was developed using commercial off-the-shelf lenses. The system incorporates a six meter relay telescope and two sets of focusing optics. The spacing of the focusing optics is changed to allow the fluence on the sample to vary from 2 to 14 Joules per square centimeter in discrete steps. More importantly, these optics use the special properties of image relaying to image the aperture plane onto the sample to form a pupil relay with a beam profile corresponding almost exactly to the flat profile found at the aperture. A flat beam profile speeds scanning by providing a uniform intensity across a larger area on the sample. The relayed pupil plane is more stable with regards to jitter and beam wander. Image relaying also reduces other perturbations from diffraction, scatter, and focus conditions. Image relaying, laser conditioning, and the optical system designed to accomplish the stated goals are discussed.

  11. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

    Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
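
    The quoted percentage improvements follow directly from the reported errors:

```python
# Fiducial localization error (mm): conventional B-mode vs synthetic aperture.
fle_bmode, fle_sa = 0.21, 0.15
# Target registration error (mm) for the resulting calibrations.
tre_bmode, tre_sa = 2.00, 1.78

fle_gain = (fle_bmode - fle_sa) / fle_bmode * 100  # ~29%
tre_gain = (tre_bmode - tre_sa) / tre_bmode * 100  # 11%
```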

  12. High resolution beamforming for small aperture arrays

    NASA Astrophysics Data System (ADS)

    Clark, Chris; Null, Tom; Wagstaff, Ronald A.

    2003-04-01

    Achieving fine resolution bearing estimates for multiple sources using acoustic arrays with small apertures, in number of wavelengths, is a difficult challenge. It requires both large signal-to-noise ratio (SNR) gains and very narrow beam responses. High resolution beamforming for small aperture arrays is accomplished by exploiting acoustical fluctuations. Acoustical fluctuations in the atmosphere are caused by wind turbulence along the propagation path, air turbulence at the sensor, source/receiver motion, unsteady source level, and fine scale temperature variations. Similar environmental and source dependent phenomena cause fluctuations in other propagation media, e.g., undersea, optics, infrared. Amplitude fluctuations are exploited to deconvolve the beam response functions from the beamformed data of small arrays to achieve high spatial resolution, i.e., fine bearing resolution, and substantial SNR gain. Results are presented for a six microphone low-frequency array with an aperture of less than three wavelengths. [Work supported by U.S. Army Armament Research Development and Engineering Center.]

  13. Filled aperture concepts for the Terrestrial Planet Finder

    NASA Astrophysics Data System (ADS)

    Ridgway, Stephen T.

    2003-02-01

    Filled aperture telescopes can deliver a real, high Strehl image which is well suited for discrimination of faint planets in the vicinity of bright stars and against an extended exo-zodiacal light. A filled aperture offers a rich variety of PSF control and diffraction suppression techniques. Filled apertures are under consideration for a wide spectral range, including visible and thermal-IR, each of which offers a significant selection of biomarker molecular bands. A filled aperture visible TPF may be simpler in several respects than a thermal-IR nuller. The required aperture size (or baseline) is much smaller, and no cryogenic systems are required. A filled aperture TPF would look and act like a normal telescope - vendors and users alike would be comfortable with its design and operation. Filled aperture telescopes pose significant challenges in production of large primary mirrors, and in very stringent wavefront requirements. Stability of the wavefront control, and hence of the PSF, is a major issue for filled aperture systems. Several groups have concluded that these and other issues can be resolved, and that filled aperture options are competitive for a TPF precursor and/or for the full TPF mission. Ball, Boeing-SVS and TRW have recently returned architecture reviews on filled aperture TPF concepts. In this paper, I will review some of the major considerations underlying these filled aperture concepts, and suggest key issues in a TPF Buyer's Guide.

  14. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-08-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multi-wavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.

  15. A novel multi slit X-ray backscatter camera based on synthetic aperture focusing

    NASA Astrophysics Data System (ADS)

    Wieder, Frank; Ewert, Uwe; Vogel, Justus; Jaenisch, Gerd-Rüdiger; Bellon, Carsten

    2017-02-01

    A special slit collimator was developed earlier for fast acquisition of X-ray backscatter images. The design was based on a twisted slit design (ruled surfaces) in a tungsten block to acquire backscatter images. Comparison with alternative techniques, such as the flying-spot and coded-aperture pinhole techniques, could not confirm the expected higher contrast sensitivity. In analogy to the coded aperture technique, a novel multi slit camera was designed and tested. Several twisted slits were arranged in parallel in a metal block. The CAD design of different multi-slit cameras was evaluated and optimized by the computer simulation packages aRTist and McRay. The camera projects a set of equal images, one per slit, onto the digital detector array, where they overlay each other. Afterwards, the aperture is corrected based on a deconvolution algorithm to focus the overlaying projections into a single representation of the object. Furthermore, a correction of the geometrical distortions due to the slit geometry is performed. The expected increase of the contrast-to-noise ratio is proportional to the square root of the number of parallel slits in the camera. However, additional noise has to be considered originating from the deconvolution operation. The slit design, functional principle, and the expected limits of this technique are discussed.
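
    As an illustrative sketch (not from the paper), the square-root scaling of contrast-to-noise ratio (CNR) with the number of parallel slits can be checked numerically by averaging N independently noisy copies of a hypothetical backscatter profile; the signal shape and noise level below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

signal = np.zeros(200)
signal[90:110] = 1.0          # hypothetical backscatter feature
sigma = 0.5                   # assumed per-image noise level

sig_mask = np.zeros(200, bool)
sig_mask[90:110] = True
bg_mask = ~sig_mask

def cnr(image):
    """Contrast-to-noise ratio: feature contrast over background noise."""
    contrast = image[sig_mask].mean() - image[bg_mask].mean()
    return contrast / image[bg_mask].std()

def mean_cnr(n_slits, trials=200):
    """Average CNR after combining n_slits noisy copies of the projection."""
    vals = []
    for _ in range(trials):
        stack = signal + rng.normal(0.0, sigma, size=(n_slits, 200))
        vals.append(cnr(stack.mean(axis=0)))
    return float(np.mean(vals))

c1, c16 = mean_cnr(1), mean_cnr(16)
print(c16 / c1)   # close to sqrt(16) = 4
```

In practice the deconvolution step adds correlated noise of its own, as the abstract notes, so the realized gain falls short of this idealized bound.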

  16. The sensitivity of synthetic aperture radiometers for remote sensing applications from space

    NASA Technical Reports Server (NTRS)

    Le Vine, David M.

    1990-01-01

    Aperture synthesis offers a means of realizing the full potential of microwave remote sensing from space by helping to overcome the limitations set by antenna size. The result is a potentially lighter, more adaptable structure for applications in space. However, because the physical collecting area is reduced, the signal-to-noise ratio is also reduced and may adversely affect the radiometric sensitivity. Sensitivity is an especially critical issue for measurements from low earth orbit because the motion of the platform (about 7 km/s) limits the integration time available for forming an image. The purpose of this paper is to develop expressions for the sensitivity of remote sensing systems which use aperture synthesis. The objective is to develop basic equations general enough to be used to obtain the sensitivity of the several variations of aperture synthesis which have been proposed for sensors in space. The conventional microwave imager (a scanning total power radiometer) is treated as a special case, and the paper concludes with a comparison of three synthetic aperture configurations with the conventional imager.
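
    The special case treated in the paper, the scanning total power radiometer, obeys the standard textbook sensitivity relation Delta-T = T_sys / sqrt(B * tau). A minimal sketch follows; the L-band numbers are hypothetical, chosen only to illustrate the short integration times imposed by platform motion:

```python
import math

def radiometric_sensitivity(t_sys_k, bandwidth_hz, integration_s):
    """Ideal total-power radiometer sensitivity (standard relation):
    Delta-T = T_sys / sqrt(B * tau), in kelvin.  Aperture synthesis trades
    collecting area for baselines, so its sensitivity is degraded relative
    to this bound."""
    return t_sys_k / math.sqrt(bandwidth_hz * integration_s)

# Hypothetical numbers: 500 K system temperature, 20 MHz bandwidth, and
# ~0.3 s of integration allowed by low-Earth-orbit platform motion.
dt = radiometric_sensitivity(500.0, 20e6, 0.3)
print(round(dt, 3))   # kelvin
```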

  17. Coherent synthetic imaging using multi-aperture scanning Fourier ptychography

    NASA Astrophysics Data System (ADS)

    Xie, Zongliang; Ma, Haotong; Qi, Bo; Ren, Ge

    2016-10-01

    High resolution is what the synthetic aperture technique strives for. In this paper, we propose an approach to coherent synthetic imaging with sparse aperture systems using a multi-aperture scanning Fourier ptychography algorithm, which can further improve the resolution of sparse aperture systems. The reported technique first acquires a series of raw images by scanning a sparse aperture system, and the captured images are then used to synthesize a larger spectrum in the frequency domain using the aperture-scanning Fourier ptychography algorithm. The system's travel circumvents its diffraction limit so that a super-resolution image can be obtained. Numerical simulations demonstrate the validity of the approach. The technique proposed in this paper may find wide applications in synthetic aperture imaging and astronomy.

  18. Imaging performance of annular apertures. II - Line spread functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1978-01-01

    Line images formed by aberration-free optical systems with annular apertures are investigated in the whole range of central obstruction ratios. Annular apertures form line images with central and side line groups. The number of lines in each line group is given by the ratio of the outer diameter of the annular aperture divided by the width of the annulus. The theoretical energy fraction of 0.889 in the central line of the image formed by an unobstructed aperture increases for centrally obstructed apertures to 0.932 for the central line group. Energy fractions for the central and side line groups are practically constant for all obstruction ratios and for each line group. The illumination of rectangular secondary apertures of various length/width ratios by apertures of various obstruction ratios is discussed.

  19. Adaptive liquid crystal iris

    NASA Astrophysics Data System (ADS)

    Zhou, Zuowei; Ren, Hongwen; Nah, Changwoon

    2014-09-01

    We report an adaptive iris using a twisted nematic liquid crystal (TNLC) and a hole-patterned electrode. When an external voltage is applied to the TNLC, the directors of the LC near the edge of the hole are unwound first. Increasing the voltage can continuously unwind the LC toward the center. When the TNLC is sandwiched between two polarizers, it exhibits an iris-like character. Either a normal mode or a reverse mode can be obtained depending on the orientations of the transmission axes of the two polarizers. In contrast to liquid irises, the aperture of the LC iris can be closed completely. Moreover, it has the advantages of large variability of the aperture diameter, good stability, and low power consumption. Applications of the device for controlling the laser energy and correcting optical aberration are foreseeable.

  20. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link. From a transmission point of view, digital transmission has therefore been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding.

  1. Edge equilibrium code for tokamaks

    SciTech Connect

    Li, Xujing; Drozdov, Vladimir V.

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  2. Experimental instrumentation system for the Phased Array Mirror Extendible Large Aperture (PAMELA) test program

    NASA Technical Reports Server (NTRS)

    Boykin, William H., Jr.

    1993-01-01

    Adaptive optics are used in telescopes for both viewing objects with minimum distortion and for transmitting laser beams with minimum beam divergence and dance. In order to test concepts on a smaller scale, NASA MSFC is in the process of setting up an adaptive optics test facility with precision (fraction of wavelengths) measurement equipment. The initial system under test is the adaptive optical telescope called PAMELA (Phased Array Mirror Extendible Large Aperture). Goals of this test are: assessment of test hardware specifications for PAMELA application and the determination of the sensitivities of instruments for measuring PAMELA (and other adaptive optical telescopes) imperfections; evaluation of the PAMELA system integration effort and test progress and recommended actions to enhance these activities; and development of concepts and prototypes of experimental apparatuses for PAMELA.

  3. Matrix method to find a new set of Zernike coefficients from an original set when the aperture radius is changed.

    PubMed

    Campbell, Charles E

    2003-02-01

    A matrix method is developed that allows a new set of Zernike coefficients that describe a surface or wave front appropriate for a new aperture size to be found from an original set of Zernike coefficients that describe the same surface or wave front but use a different aperture size. The new set of coefficients, arranged as elements of a vector, is formed by multiplying the original set of coefficients, also arranged as elements of a vector, by a conversion matrix formed from powers of the ratio of the new to the original aperture and elements of a matrix that forms the weighting coefficients of the radial Zernike polynomial functions. In developing the method, a new matrix method for expressing Zernike polynomial functions is introduced and used. An algorithm is given for creating the conversion matrix along with computer code to implement the algorithm.
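
    The analytic conversion matrix is developed in the paper itself; as an illustrative numerical cross-check (a least-squares refit, not the paper's matrix method), the rescaling behavior can be reproduced for a single defocus term. For a pure defocus coefficient a and aperture ratio r, the new defocus coefficient should be a * r**2, with the leftover constant moving into piston:

```python
import numpy as np

# Original wavefront: pure defocus Z(rho) = sqrt(3) * (2*rho**2 - 1),
# coefficient a, evaluated over an aperture shrunk by ratio r.
a, r = 1.0, 0.8
rho_new = np.linspace(0.0, 1.0, 201)   # normalized radius on the new aperture
wavefront = a * np.sqrt(3.0) * (2.0 * (r * rho_new) ** 2 - 1.0)

# Refit on the new aperture with a piston + defocus basis.
basis = np.column_stack([np.ones_like(rho_new),
                         np.sqrt(3.0) * (2.0 * rho_new ** 2 - 1.0)])
coeffs, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
print(coeffs[1])   # a * r**2 = 0.64, matching the expected rescaling
```

The fit is exact here because the rescaled defocus stays within the span of piston and defocus; higher-order terms mix across more radial orders, which is exactly what the paper's conversion matrix captures in closed form.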

  4. The physics of light transmission through subwavelength apertures and aperture arrays

    NASA Astrophysics Data System (ADS)

    Weiner, J.

    2009-06-01

    The passage of light through apertures much smaller than the wavelength of the light has proved to be a surprisingly subtle phenomenon. This report describes how modern developments in nanofabrication, coherent light sources and numerical vector field simulations have led to the upending of early predictions from scalar diffraction theory and classical electrodynamics. Optical response of real materials to incident coherent radiation at petahertz frequencies leads to unexpected consequences for transmission and extinction of light through subwavelength aperture arrays. This paper is a report on progress in our understanding of this phenomenon over the past decade.

  5. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new capability of adaptive mesh refinement into the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three in memory and performance improvements over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.

  6. Fluidic adaptive lens of transformable lens type

    NASA Astrophysics Data System (ADS)

    Zhang, De-Ying; Justis, Nicole; Lo, Yu-Hwa

    2004-05-01

    Fluidic adaptive lenses with a transformable lens type were demonstrated. By adjusting the fluidic pressure, not only can the lens properties, such as the focal distance and numerical aperture, be tuned dynamically but also different lens types, such as planoconvex, planoconcave, biconvex, biconcave, positive meniscus, and negative meniscus lenses, can be formed. The shortest focal length for a 20 mm aperture adaptive lens is 14.3 mm when the device is transformed into a positive lens, and -6.3 mm when transformed into a negative lens. The maximum resolution of the fluidic lens is better than 40 line pairs/mm.
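
    As a rough illustration of the focal tuning described above (values are hypothetical, not the device's measured parameters), the thin-lens lensmaker's relation links the pressure-controlled surface curvature to focal length:

```python
def planoconvex_focal_mm(radius_mm, n_fluid):
    """Thin-lens lensmaker's relation for a plano-convex fluid lens:
    1/f = (n - 1) / R.  The sign of R flips for a plano-concave shape,
    giving a negative focal length."""
    return radius_mm / (n_fluid - 1.0)

# Hypothetical values: a water-like fluid with n = 1.33, pressurized until
# the membrane bulges to a 4.7 mm radius of curvature.
f = planoconvex_focal_mm(4.7, 1.33)
print(round(f, 1))   # mm; a short positive focal length
```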

  7. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  8. The Configurable Aperture Space Telescope (CAST)

    NASA Astrophysics Data System (ADS)

    Ennico, Kimberly; Bendek, Eduardo A.; Lynch, Dana H.; Vassigh, Kenny K.; Young, Zion

    2016-07-01

    The Configurable Aperture Space Telescope, CAST, is a concept that provides access to a UV/visible-infrared wavelength sub-arcsecond imaging platform from space, something that will be in high demand after the retirement of the astronomy workhorse, the 2.4 meter diameter Hubble Space Telescope. CAST allows building large aperture telescopes based on small, compatible and low-cost segments mounted on autonomous cube-sized satellites. The concept merges existing technology (segmented telescope architecture) with emerging technology (smartly interconnected modular spacecraft, active optics, deployable structures). Requiring identical mirror segments, CAST's optical design is a spherical primary and secondary mirror telescope with modular multi-mirror correctors placed at the system focal plane. The design enables wide fields of view, up to as much as three degrees, while maintaining aperture growth and image performance requirements. We present a point design for the CAST concept based on a 0.6 meter diameter (3 x 3 segments) growing to a 2.6 meter diameter (13 x 13 segments) primary, with fixed Rp=13,000 mm and Rs=8,750 mm curvatures, f/22.4 and f/5.6, respectively. Its diffraction limited design uses a two arcminute field of view corrector with a 7.4 arcsec/mm platescale, and can support a range of platescales as fine as 0.01 arcsec/mm. Our paper summarizes CAST, presents a strawman optical design and requirements for the underlying modular spacecraft, highlights design flexibilities, and illustrates applications enabled by this new method in building space observatories.

  9. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic Aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception. These affect the image quality and the frame rate. Therefore, optimization of the parameters affecting the image quality of SA is of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring an optimal image quality while generating high resolution SA images. Optimization of the image quality is mainly performed based on measures such as F-number, number of emissions and the aperture size. They are considered to be the acquisition factors that contribute most to the quality of the high resolution images in SA. Therefore, the image quality performance is quantified in terms of full-width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields a very good image quality compared with using 256 emissions and the full aperture size. Therefore the number of emissions and the maximum sweep angle in SA can be optimized to reach a reasonably good performance, and to increase the frame rate by lowering the required number of emissions. All the measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
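
    A minimal sketch of the FWHM metric mentioned above, applied to a Gaussian stand-in for a measured beam profile (the profile model and axis scale are assumptions, not SARUS data):

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a sampled profile, with the two
    half-maximum crossings located by linear interpolation."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i0, i1 = above[0], above[-1]
    left = np.interp(half, [profile[i0 - 1], profile[i0]], [x[i0 - 1], x[i0]])
    right = np.interp(half, [profile[i1 + 1], profile[i1]], [x[i1 + 1], x[i1]])
    return right - left

x = np.linspace(-3.0, 3.0, 601)        # mm, hypothetical lateral axis
sigma = 0.4
psf = np.exp(-x**2 / (2 * sigma**2))   # Gaussian stand-in for a point spread
w = fwhm(x, psf)
print(w)   # ~ 2.355 * sigma for a Gaussian
```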

  10. Synthetic aperture radar autofocus via semidefinite relaxation.

    PubMed

    Liu, Kuang-Hung; Wiesel, Ami; Munson, David C

    2013-06-01

    The autofocus problem in synthetic aperture radar imaging amounts to estimating unknown phase errors caused by unknown platform or target motion. At the heart of three state-of-the-art autofocus algorithms, namely, phase gradient autofocus, multichannel autofocus (MCA), and Fourier-domain multichannel autofocus (FMCA), is the solution of a constant modulus quadratic program (CMQP). Currently, these algorithms solve a CMQP by using an eigenvalue relaxation approach. We propose an alternative relaxation approach based on semidefinite programming, which has recently attracted considerable attention in other signal processing problems. Experimental results show that our proposed methods provide promising performance improvements for MCA and FMCA through an increase in computational complexity.
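
    The eigenvalue relaxation that the abstract refers to can be sketched on an idealized noise-free CMQP instance (the rank-one data matrix below is an assumption made for illustration; the paper's semidefinite relaxation is not reproduced here):

```python
import numpy as np

# CMQP:  maximize  x^H Q x  subject to  |x_k| = 1  for all k,
# where x holds the per-pulse phase-error terms e^{j*phi_k}.  Relaxing the
# per-entry modulus constraints to a norm constraint makes the maximizer the
# principal eigenvector of Q; projecting its entries back to unit modulus
# gives a feasible phase estimate.
rng = np.random.default_rng(1)
n = 8
phi = rng.uniform(-np.pi, np.pi, n)      # hypothetical true phase errors
x_true = np.exp(1j * phi)
Q = np.outer(x_true, x_true.conj())      # idealized noise-free data matrix

eigvals, eigvecs = np.linalg.eigh(Q)     # Hermitian eigendecomposition
v = eigvecs[:, -1]                       # principal eigenvector
x_hat = v / np.abs(v)                    # project entries to unit modulus

obj = float(np.real(x_hat.conj() @ Q @ x_hat))
print(obj)   # n**2 = 64 here: the relaxation is tight in the noise-free case
```

With noisy, higher-rank Q the projection step loses optimality, which is the gap the semidefinite relaxation is designed to narrow.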

  11. Synthetic aperture radar in geosynchronous orbit

    NASA Technical Reports Server (NTRS)

    Tomiyasu, K.

    1978-01-01

    Radar images of the earth were taken with a synthetic aperture radar (SAR) from geosynchronous orbital ranges by utilizing satellite motion relative to a geostationary position. A suitable satellite motion was obtained by having an orbit plane inclined relative to the equatorial plane and by having an eccentric orbit. Potential applications of these SAR images are topography, water resource management and soil moisture determination. Preliminary calculations show that the United States can be mapped with 100 m resolution cells in about 4 hours. With the use of microwave signals the mapping can be performed day or night, through clouds and during adverse weather.

  12. Combined synthetic aperture radar/Landsat imagery

    NASA Technical Reports Server (NTRS)

    Marque, R. E.; Maurer, H. E.

    1978-01-01

    This paper presents the results of investigations into merging synthetic aperture radar (SAR) and Landsat multispectral scanner (MSS) images using optical and digital merging techniques. The unique characteristics of airborne and orbital SAR and Landsat MSS imagery are discussed. The case for merging the imagery is presented and tradeoffs between optical and digital merging techniques explored. Examples of Landsat and airborne SAR imagery are used to illustrate optical and digital merging. Analysis of the merged digital imagery illustrates the improved interpretability resulting from combining the outputs from the two sensor systems.

  13. Aperture correction for a sphere interferometer

    NASA Astrophysics Data System (ADS)

    Arnold Nicolaus, R.; Bönsch, Gerhard

    2009-12-01

    Considerations have been made to derive a correction for the diameter measurements of a sphere by means of a special sphere interferometer. This correction is caused by the finite diameter of the light source acting as the entrance 'pinhole' aperture in the light collimating system. The finite diameter has the effect that the wave which is incident on the sphere is a superposition of spherical waves which are slightly inclined with respect to each other. The resulting correction is essential for high accuracy dimensional measurements of silicon spheres to determine the Avogadro constant—a new determination of which is a contribution to a new definition of the kilogram.

  14. Synthetic aperture radar for disaster monitoring

    NASA Astrophysics Data System (ADS)

    Dunkel, R.; Saddler, R.; Doerry, A. W.

    2011-06-01

    Synthetic Aperture Radar (SAR) is well known to afford imaging in darkness and through clouds, smoke, and other obscurants. As such, it is particularly useful for mapping and monitoring a variety of natural and man-made disasters. A portfolio of SAR image examples has been collected using General Atomics Aeronautical Systems, Inc.'s (GA-ASI's) Lynx® family of Ku-Band SAR systems, flown on both operational and test-bed aircraft. Images are provided that include scenes of flooding, ice jams in North Dakota, agricultural field fires in southern California, and ocean oil slicks from seeps off the coast of southern California.

  15. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
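
    The depth-of-focus extension described above can be illustrated with a small pupil-plane simulation (all parameters are assumptions, not the report's design): a strong cubic phase mask makes the PSF far less sensitive to a wave of defocus than an uncoded aperture, at the cost of a blurred PSF that must be restored in post-processing:

```python
import numpy as np

N = 256
u = np.linspace(-1.0, 1.0, N)
U, V = np.meshgrid(u, u)
pupil = (U**2 + V**2) <= 1.0             # clear circular aperture

def psf(phase_waves):
    """Incoherent PSF of the pupil with the given phase (in waves)."""
    field = pupil * np.exp(2j * np.pi * phase_waves)
    p = np.fft.fftshift(np.abs(np.fft.fft2(field, (2 * N, 2 * N)))**2)
    return p / p.sum()

def correlation(a, b):
    """Normalized similarity between two PSFs."""
    return np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b))

defocus = U**2 + V**2                    # one wave of defocus at the pupil edge
cubic = 10.0 * (U**3 + V**3)             # assumed cubic mask, 10 waves peak

plain = correlation(psf(0.0 * U), psf(defocus))
coded = correlation(psf(cubic), psf(cubic + defocus))
print(plain, coded)   # the coded PSF changes far less under defocus
```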

  16. Accurate laser guide star wavefront sensor simulation for the E-ELT first light adaptive optics module

    NASA Astrophysics Data System (ADS)

    Patti, Mauro; Schreiber, Laura; Arcidiacono, Carmelo; Bregoli, Giovanni; Ciliegi, Paolo; Diolaiti, Emiliano; Esposito, Simone; Feautrier, Philippe; Lombini, Matteo

    2016-07-01

    MAORY will be the multi-conjugate adaptive optics module for the E-ELT first light. The baseline is to operate wavefront sensing using 6 Sodium Laser Guide Stars and 3 Natural Guide Stars to overcome intrinsic limitations of artificial beacons and to mitigate the impact of the sodium layer structure and variability. In particular, some critical components of MAORY need to be designed and dimensioned to reduce the spurious effects arising from the sodium layer density distribution and its variation. The MAORY end-to-end simulation code has been designed to accurately model the Laser Guide Star image in the Shack-Hartmann wavefront sensor sub-apertures and to allow sodium profile temporal evolution. The fidelity with which the simulation code translates the sodium profiles into Laser Guide Star images at the wavefront sensor focal plane has been verified using a laboratory prototype.

  17. NPT: a large-aperture telescope for high dynamic range astronomy

    NASA Astrophysics Data System (ADS)

    Joseph, Robert D.; Kuhn, Jeff R.; Tokunaga, Alan T.; Coulter, Roy; Ftaclas, Christo; Graves, J. Elon; Hull, Charles L.; Jewitt, D.; Mickey, Donald L.; Moretto, Gilberto; Neill, Doug; Northcott, Malcolm J.; Roddier, Claude A.; Roddier, Francois J.; Siegmund, Walter A.; Owen, Tobias C.

    2000-06-01

    All existing night-time astronomical telescopes, regardless of aperture, are blind to an important part of the universe - the region around bright objects. Technology now exists to build an unobscured 6.5 m aperture telescope which will attain coronagraphic sensitivity heretofore unachieved. A working group hosted by the University of Hawaii Institute for Astronomy has developed plans for a New Planetary Telescope which will permit astronomical observations which have never before been possible. In its narrow-field mode the off-axis optical design, combined with adaptive optics, provides superb coronagraphic capabilities and a very low thermal IR background. These make it ideal for studies of extra-solar planets and circumstellar discs, as well as for general IR astronomy. In its wide-field mode the NPT provides a 2 degree diameter field for surveys of Kuiper Belt Objects and Near-Earth Objects, surveys central to current intellectual interests in solar system astronomy.

  18. Towards laser guide stars for multi-aperture interferometry: an application to the hypertelescope

    NASA Astrophysics Data System (ADS)

    Nuñez, Paul D.; Labeyrie, Antoine; Riaud, Pierre

    2014-04-01

    Optical interferometry has been successful at achieving milliarcsecond resolution on bright stars. Imaging performance can improve greatly by increasing the number of baselines, which has motivated proposals to build large (~100 m) optical interferometers with tens to hundreds of telescopes. It is also desirable to adaptively correct atmospheric turbulence to obtain direct phased images of astrophysical sources. When a natural guide star is not available, we investigate the feasibility of using a modified laser-guide-star technique that is suitable for large diluted apertures. The method consists of using subsets of apertures to create an array of artificial stars in the sodium layer and collecting back-scattered light with the same subapertures. We present some numerical and laboratory simulations that quantify the requirements and sensitivity of the technique.

  19. Synthetic aperture radar signal processing on the MPP

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Seiler, E. J.

    1987-01-01

    Satellite-borne Synthetic Aperture Radars (SAR) sense areas of several thousand square kilometers in seconds and transmit phase history signal data at several tens of megabits per second. The Shuttle Imaging Radar-B (SIR-B) has a variable swath of 20 to 50 km and acquired data over 100 km along track in about 13 seconds. Even with the simplification of separability of the reference function, the processing still requires considerable resources: high-speed I/O, large memory and fast computation. Processing systems with regular hardware take hours to process one Seasat image and about one hour for a SIR-B image. Bringing this processing time closer to acquisition times requires an end-to-end system solution. For the purpose of demonstration, software was implemented on the present Massively Parallel Processor (MPP) configuration for processing Seasat and SIR-B data. The software takes advantage of the high processing speed offered by the MPP, the large Staging Buffer, and the high speed I/O between the MPP array unit and the Staging Buffer. It was found that with unoptimized Parallel Pascal code, the processing time on the MPP for a 4096 x 4096 sample subset of signal data ranges between 18 and 30.2 seconds, depending on options.

  20. Experiment in Onboard Synthetic Aperture Radar Data Processing

    NASA Technical Reports Server (NTRS)

    Holland, Matthew

    2011-01-01

    Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and require too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time and writes out processed data in a form that is useful for analysis of the radar observations.
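
    The specific RHBS techniques are not detailed in the abstract; as a generic sketch of one classic software-hardening idea (all names and values below are hypothetical), a computation can be run redundantly and the results majority-voted so that a single SEU-corrupted copy is outvoted:

```python
from collections import Counter

def tmr(compute, corrupt_one=None):
    """Software triple modular redundancy: run the computation three
    times and return the majority result.  corrupt_one, if given,
    simulates an SEU flipping bits in one copy."""
    results = [compute() for _ in range(3)]
    if corrupt_one is not None:
        results[0] = corrupt_one
    winner, votes = Counter(results).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: multiple upsets detected")
    return winner

# Hypothetical stand-in for one step of SAR range processing.
range_gate = lambda: 0x3A7

print(tmr(range_gate))                       # 935, the uncorrupted value
print(tmr(range_gate, corrupt_one=0x3A6))    # still 935: the SEU is outvoted
```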

  1. KAOS: kilo-aperture optical spectrograph

    NASA Astrophysics Data System (ADS)

    Barden, Samuel C.; Dey, Arjun; Boyle, Brian; Glazebrook, Karl

    2004-09-01

    A design is described for a potential new facility capable of obtaining detailed spectra of millions of objects in the Universe, to explore the complexity of the Universe and to answer fundamental questions relating to the equation of state of dark energy and to how the Milky Way galaxy formed. The specific design described is envisioned for implementation on the Gemini 8-meter telescopes. It utilizes a 1.5° field of view and samples that field with up to ~5000 apertures. This Kilo-Aperture Optical Spectrograph (KAOS) is mounted at prime focus with a 4-element corrector, an atmospheric dispersion compensator (ADC), and an Echidna-style fiber-optic positioner. The ADC doubles as a wobble plate, allowing fast guiding that cancels out the wind buffeting of the telescope. The fibers, which can be reconfigured in less than 10 minutes, feed an array of 12 spectrographs located in the pier of the telescope. The spectrographs are capable of providing spectral resolving powers from a few thousand up to about 40,000.

  2. Very large aperture optics for space applications

    NASA Astrophysics Data System (ADS)

    Horwath, T. G.; Smith, J. P.; Johnson, M. T.

    1994-09-01

    A new type of space optics technology is presented which promises the realization of very large apertures (tens of meters), while being packageable into lightweight, small-volume containers compatible with conventional launch vehicles. This technology makes use of thin foils of circular shape which are uniformly mass loaded around the perimeter. Once unfurled and set into rapid rotation about the transverse axis, the foil is stretched into a perfectly flat plane by the centrifugal forces acting on the peripheral masses. The simplest applications of this novel technology are optically flat reflectors, using metallized foils of Mylar, Kevlar, or Kapton. Other more complex optical components can be realized by use of binary optics techniques, such as depositing holograms by selective local microscale removal of the reflective surface. Electrostatic techniques, in conjunction with an auxiliary foil, under local, distributed real-time control of the optical parameters, allow implementation of functions like beam steering and focal length adjustments. Gas pressurization allows stronger curvatures and thus smaller focal ratios for non-imaging applications. Limits on aperture are imposed primarily by manufacturing capabilities. Applications of such large optics in space are numerous, ranging from military uses, such as space-based lasers, to civilian ones such as power beaming, solar energy collection, and astronomy. This paper examines this simple and innovative concept in detail, discusses deployment and attitude control issues, and presents approaches for realization.

  3. The SKA New Instrumentation: Aperture Arrays

    NASA Astrophysics Data System (ADS)

    van Ardenne, A.; Faulkner, A. J.; de Vaate, J. G. bij

    The radio frequency window of the Square Kilometre Array is planned to cover the wavelength regime from centimetres up to a few metres. For this range to be optimally covered, different antenna concepts are considered, enabling many science cases. At the lowest frequency range, up to a few GHz, it is expected that multi-beam techniques will be used, increasing the effective field of view to a level that allows very efficient, detailed, and sensitive exploration of the complete sky. Although sparse narrow-band phased arrays are as old as radio astronomy, the multi-octave sparse and dense arrays now being considered for the SKA require new low-noise design, signal processing, and calibration techniques. These new array techniques have already been successfully introduced as phased-array feeds, upgrading existing reflector telescopes and equipping new telescopes to enhance aperture efficiency as well as greatly increase their field of view (van Ardenne et al., Proc. IEEE 97(8), 2009) [1]. Aperture arrays use phased arrays without any additional reflectors; the phased-array elements are small enough to see most of the sky, intrinsically offering a large field of view.
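The multi-beam behaviour described above comes from applying per-element phase weights, so beams are formed and steered purely electronically. A minimal sketch of the array factor of a phased line array (element count, spacing, and frequency below are illustrative values, not SKA design parameters):

```python
import numpy as np

def array_factor(positions_m, freq_hz, steer_deg, theta_deg):
    """Normalized array factor of a phased line array.

    positions_m -- element positions along the array axis (metres)
    steer_deg   -- direction the phase weights steer the beam toward
    theta_deg   -- angles (degrees from broadside) at which to evaluate
    """
    c = 299_792_458.0
    k = 2 * np.pi * freq_hz / c
    pos = np.asarray(positions_m, float)
    # Conjugate phase weights align the element signals at steer_deg.
    w = np.exp(-1j * k * pos * np.sin(np.radians(steer_deg)))
    phases = np.exp(1j * k * np.outer(np.sin(np.radians(theta_deg)), pos))
    return np.abs(phases @ w) / len(pos)
```

Because the weights are just complex multipliers, many independent beams (a wide effective field of view) can be formed from the same element signals by applying several weight sets in parallel.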

  4. Integrated feeds for electronically reconfigurable apertures

    NASA Astrophysics Data System (ADS)

    Nicholls, Jeffrey Grant

    With the increasing ubiquity of wireless technology, the need for low-profile, electronically reconfigurable, highly directive beam-steering antennas continues to grow. This thesis proposes a new electronic beam-steering antenna architecture which combines the full-space beam-steering properties of reflectarrays and transmitarrays with the low-profile feeding characteristics of leaky-wave antennas. Two designs are developed: an integrated-feed reflectarray and an integrated-feed transmitarray, both of which integrate a leaky-wave feed directly next to the reconfigurable aperture itself. The integrated-feed transmitarray proved to be the better architecture due to its simpler design and better performance. A 6-by-6 element array was fabricated and experimentally verified, and full-space (both azimuth and elevation) beam-steering was demonstrated at angles up to 45 degrees off broadside. In addition to the reduction in profile, the integrated-feed design enables robust fixed control of the amplitude distribution across the aperture, a characteristic not as easily attained in typical reflectarrays/transmitarrays.

  5. New physics and applications of apertures in thin metal films

    NASA Astrophysics Data System (ADS)

    Gordon, Reuven; Al-Balushi, Ahmed A.; Kotnala, Abhay; Gelfand, Ryan F.; Wheaton, Skylar; Chen, Shuwen; Jin, Shilong

    2014-08-01

    The nanoplasmonic properties of apertures in metal films have been studied extensively; however, we have recently discovered surprising new features of this simple system with applications to super-focusing and super-scattering. Furthermore, apertures allow for optical tweezers that can hold onto particles of the order of 1 nm; we briefly highlight our work using these apertures to study protein-small-molecule interactions and protein-DNA binding.

  6. Experimental investigations of 3 mm aperture PPLN structures

    NASA Astrophysics Data System (ADS)

    Kolker, D.; Pronyushkina, A.; Boyko, A.; Kostyukova, N.; Trashkeev, S.; Nuyshkov, B.; Shur, V.

    2017-01-01

    We report on an investigation of domestically produced 3 mm aperture periodically poled lithium niobate (PPLN) structures for a cascaded mid-IR OPO. Wide-aperture periodically poled MgO-doped lithium niobate (LiNbO3) structures in multigrating, fan-out, and multi-fan-out configurations were prepared at “Labfer LTD”. Laser sources based on such structures can be used for special applications. Four different PPLN structures were investigated and the effective aperture for efficient pumping was determined.

  7. Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution

    DTIC Science & Technology

    1989-11-01

    Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution, Jerald Lee Bauck, University of Illinois, Urbana, IL. Cited therein: "A digital signal processing view of strip-mapping synthetic aperture radar," M.S. thesis, University of Illinois, Urbana, IL, 1988.

  8. Detection of and compensation for blocked elements using large coherent apertures: ex vivo studies

    NASA Astrophysics Data System (ADS)

    Jakovljevic, Marko; Bottenus, Nick; Kuo, Lily; Kumar, Shalki; Dahl, Jeremy; Trahey, Gregg

    2016-04-01

    When imaging with ultrasound through the chest wall, it is not uncommon for parts of the array to get blocked by ribs, which can limit the acoustic window and significantly impede visualization of the structures of interest. With the development of large-aperture, high-element-count, 2-D arrays and their potential use in transthoracic imaging, detecting and compensating for the blocked elements is becoming increasingly important. We synthesized large coherent 2-D apertures and used them to image a point target through excised samples of canine chest wall. Blocked elements are detected based on low amplitude of their signals. As a part of compensation, blocked elements are turned off on transmit (Tx) and receive (Rx), and point-target images are created using: coherent summation of the remaining channels, compounding of intercostal apertures, and adaptive weighting of the available Tx/Rx channel-pairs to recover the desired k-space response. The adaptive compensation method also includes a phase aberration correction to ensure that the non-blocked Tx/Rx channel pairs are summed coherently. To evaluate the methods, we compare the point-spread functions (PSFs) and near-field clutter levels for the transcostal and control acquisitions. Specifically, applying k-space compensation to the sparse aperture data created from the control acquisition reduces sidelobes from -6.6 dB to -12 dB. When applied to the transcostal data in combination with phase-aberration correction, the same method reduces sidelobes only by 3 dB, likely due to significant tissue induced acoustic noise. For the transcostal acquisition, turning off blocked elements and applying uniform weighting results in maximum clutter reduction of 5 dB on average, while the PSF stays intact. Compounding reduces clutter by about 3 dB while the k-space compensation increases clutter magnitude to the non-compensated levels.
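The amplitude-based detection step described above can be sketched as thresholding each channel's RMS against the aperture-wide median and re-weighting the surviving channels. The threshold value and the uniform re-weighting below are illustrative choices, not the authors' exact parameters:

```python
import numpy as np

def compensate_blocked(channel_data, threshold=0.5):
    """Detect low-amplitude (blocked) channels and re-weight the rest.

    channel_data -- array of shape (n_channels, n_samples)
    threshold    -- fraction of the median channel RMS below which a
                    channel is declared blocked
    Returns (summed_signal, blocked_mask).
    """
    rms = np.sqrt(np.mean(np.abs(channel_data) ** 2, axis=1))
    blocked = rms < threshold * np.median(rms)
    w = np.where(blocked, 0.0, 1.0)
    # Uniform re-weighting over the surviving channels keeps the summed
    # amplitude comparable to the full-aperture case.
    w *= len(w) / max(w.sum(), 1)
    return (w[:, None] * channel_data).sum(axis=0), blocked
```

Using the median rather than the mean as the reference makes the detector robust when a sizeable minority of elements is shadowed, since the blocked channels barely shift the median.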

  9. Functionalized apertures for the detection of chemical and biological materials

    DOEpatents

    Letant, Sonia E.; van Buuren, Anthony W.; Terminello, Louis J.; Thelen, Michael P.; Hope-Weeks, Louisa J.; Hart, Bradley R.

    2010-12-14

    Disclosed are nanometer to micron scale functionalized apertures constructed on a substrate made of glass, carbon, semiconductors or polymeric materials that allow for the real time detection of biological materials or chemical moieties. Many apertures can exist on one substrate allowing for the simultaneous detection of numerous chemical and biological molecules. One embodiment features a macrocyclic ring attached to cross-linkers, wherein the macrocyclic ring has a biological or chemical probe extending through the aperture. Another embodiment achieves functionalization by attaching chemical or biological anchors directly to the walls of the apertures via cross-linkers.

  10. Diffraction from oxide confinement apertures in vertical-cavity lasers

    SciTech Connect

    Roos, P.A.; Carlsten, J.L.; Kilper, D.C.; Lear, K.L.

    1999-08-01

    Direct measurement of scattered fields from oxide confinement apertures in vertical-cavity lasers is presented. Diffraction fringes associated with each transverse lasing mode are detected in the far field from devices with varying oxide aperture dimensions and with quantum efficiencies as high as 48%. The diffracted pattern symmetries match the rectangular symmetry of the oxide apertures present in the devices, and fringe locations are compared to Fraunhofer theory. The fraction of power diffracted from the lasing mode remains roughly constant as a function of relative pump rate, but is shown to depend on both transverse mode order and oxide aperture size. © 1999 American Institute of Physics.
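The Fraunhofer theory referred to predicts, along each axis of a rectangular aperture, a sinc-squared far-field intensity with nulls at sin(theta) = m·lambda/a. A minimal sketch (the aperture width and wavelength in the example are illustrative, not the devices' actual dimensions):

```python
import numpy as np

def fraunhofer_rect_intensity(a, wavelength, theta):
    """Normalized far-field intensity of an aperture of width a:
    I(theta) = sinc^2(a*sin(theta)/wavelength),
    where numpy's sinc(x) = sin(pi*x)/(pi*x)."""
    return np.sinc(a * np.sin(theta) / wavelength) ** 2

def first_minimum(a, wavelength):
    """Angle of the first diffraction null: sin(theta) = wavelength / a."""
    return np.arcsin(wavelength / a)
```

Comparing measured fringe (null) angles against `first_minimum` for the known oxide dimensions is the kind of check the abstract describes.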

  11. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  12. Advanced methods in synthetic aperture radar imaging

    NASA Astrophysics Data System (ADS)

    Kragh, Thomas

    2012-02-01

    For over 50 years our world has been mapped and measured with synthetic aperture radar (SAR). A SAR system operates by transmitting a series of wideband radio-frequency pulses towards the ground and recording the resulting backscattered electromagnetic waves as the system travels along some one-dimensional trajectory. By coherently processing the recorded backscatter over this extended aperture, one can form a high-resolution 2D intensity map of the ground reflectivity, which we call a SAR image. The trajectory, or synthetic aperture, is achieved by mounting the radar on an aircraft, spacecraft, or even on the roof of a car traveling down the road, and allows for a diverse set of applications and measurement techniques for remote sensing applications. It is quite remarkable that the sub-centimeter positioning precision and sub-nanosecond timing precision required to make this work properly can in fact be achieved under such real-world, often turbulent, vibrationally intensive conditions. Although the basic principles behind SAR imaging and interferometry have been known for decades, in recent years an explosion of data-exploitation techniques, driven by ever-faster computational horsepower, has led to some remarkable advances. Although SAR images are often viewed as simple intensity maps of ground reflectivity, SAR is also an exquisitely sensitive coherent imaging modality with a wealth of information buried within the phase information in the image. Some of the examples featured in this presentation will include: (1) Interferometric SAR, where by comparing the difference in phase between two SAR images one can measure subtle changes in ground topography at the wavelength scale. (2) Change detection, in which carefully geolocated images formed from two different passes are compared. (3) Multi-pass 3D SAR tomography, where multiple trajectories can be used to form 3D images. (4) Moving Target Indication (MTI), in which Doppler effects allow one to detect and
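Example (1), interferometric SAR, reduces to a per-pixel phase comparison of two co-registered complex images. A minimal sketch:

```python
import numpy as np

def interferogram(img1, img2):
    """Interferometric phase between two co-registered complex SAR images.

    The phase of img1 * conj(img2) is proportional (modulo 2*pi) to the
    path-length difference between the two passes, i.e. to topography or
    wavelength-scale surface displacement.
    """
    return np.angle(img1 * np.conj(img2))
```

The result is wrapped to (-pi, pi], which is why practical InSAR pipelines follow this step with phase unwrapping before converting phase to height.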

  13. Sinusoidal Coding.

    DTIC Science & Technology

    1995-01-01

    then made the filter bank pitch-adaptive thus ensuring roughly one sine wave per filter . The analysis in these systems does not explicitly model and...estimate the sine- wave components, but rather views them as outputs of a bank of uniformly-spaced bandpass filters . The synthesis waveform can be...viewed as a sum of the modified outputs of this filter bank . Although speech of good quality has reportedly been synthesized using these techniques

  14. Video coding with dynamic background

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    Motion estimation (ME) and motion compensation (MC) using variable block sizes, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirement of index codes for the reference frames, the computational time in ME & MC, and the memory buffer for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as the reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational-time performance than the MRF techniques. It also has an inherent capability for scene change detection (SCD), enabling adaptive group-of-pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames, as well as two relevant state-of-the-art algorithms, by 0.5-2.0 dB with less computational time.
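McFIS itself is generated by the dynamic background modeling described in the article; the sketch below substitutes a much simpler running-average background with a mean-absolute-difference scene-change test, purely to illustrate how one reference frame can serve both roles. The learning rate and SCD threshold are invented values:

```python
import numpy as np

class BackgroundReference:
    """Running-average background frame with simple scene-change detection.

    A stand-in for a learned 'most common frame in scene': pixels that stay
    stable dominate the running mean, so the reference captures the static
    background while transient motion averages out.
    """
    def __init__(self, alpha=0.05, scd_threshold=30.0):
        self.alpha = alpha                   # learning rate of the running mean
        self.scd_threshold = scd_threshold   # mean abs diff that flags a cut
        self.background = None

    def update(self, frame):
        """Fold a frame into the model; return True if a scene cut is flagged."""
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame.copy()
            return False
        scene_change = np.mean(np.abs(frame - self.background)) > self.scd_threshold
        if scene_change:
            self.background = frame.copy()   # restart the model at a cut
        else:
            self.background += self.alpha * (frame - self.background)
        return bool(scene_change)
```

Flagging a cut at the same place the reference is rebuilt is what lets SCD and reference-frame generation share one mechanism, as in the proposed scheme.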

  15. A secure and efficient entropy coding based on arithmetic coding

    NASA Astrophysics Data System (ADS)

    Li, Hengjian; Zhang, Jiashu

    2009-12-01

    A novel secure arithmetic coding scheme based on a nonlinear dynamic filter (NDF) with changeable coefficients is proposed in this paper. The NDF is employed to build a pseudorandom number generator (NDF-PRNG), and its coefficients are derived from the plaintext for higher security. During the encryption process, the mapping interval in each iteration of arithmetic coding (AC) is decided by both the plaintext and the initial values of the NDF, and data compression with entropy optimality is achieved simultaneously. This modification of the arithmetic coding methodology, which also provides security, is easy to incorporate into most international image and video standards as the final entropy coding stage without changing the existing framework. Theoretical analysis and numerical simulations on both static and adaptive models show that the proposed encryption algorithm achieves high security without loss of compression efficiency or added computational burden with respect to standard AC.
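For reference, the interval-narrowing recursion that the NDF-keyed scheme perturbs is that of a standard arithmetic coder. Below is a minimal exact (Fraction-based) static coder, with no encryption layer and none of the bit-stream termination details of a practical implementation:

```python
from fractions import Fraction

def intervals(probs):
    """Cumulative [low, high) interval per symbol from a probability table."""
    table, low = {}, Fraction(0)
    for sym, p in probs.items():
        table[sym] = (low, low + p)
        low += p
    return table

def ac_encode(msg, probs):
    """Narrow [low, low+width) once per symbol; return a point inside it."""
    table = intervals(probs)
    low, width = Fraction(0), Fraction(1)
    for sym in msg:
        a, b = table[sym]
        low, width = low + a * width, (b - a) * width
    return low  # any value in [low, low+width) decodes to msg

def ac_decode(code, probs, n):
    """Recover n symbols by locating code in successive sub-intervals."""
    table = intervals(probs)
    out = []
    for _ in range(n):
        for sym, (a, b) in table.items():
            if a <= code < b:
                out.append(sym)
                code = (code - a) / (b - a)  # rescale for the next symbol
                break
    return "".join(out)
```

The encryption idea in the paper amounts to making the symbol-to-interval mapping key-dependent at each iteration; since the intervals still partition [0, 1), compression optimality is preserved.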

  16. Very high numerical aperture light transmitting device

    DOEpatents

    Allison, Stephen W.; Boatner, Lynn A.; Sales, Brian C.

    1998-01-01

    A new light-transmitting device using a SCIN glass core and a novel calcium sodium cladding has been developed. The very high index of refraction, radiation hardness, similar solubility for rare earths, and similar melt and viscosity characteristics of the core and cladding materials make them attractive for several applications, such as high-numerical-aperture optical fibers and specialty lenses. Optical fibers up to 60 m in length have been drawn, and several simple lenses have been designed, ground, and polished. Preliminary results on the ability to directly cast optical components of lead-indium phosphate glass are also discussed, as well as the suitability of these glasses as a host medium for rare-earth-ion lasers and amplifiers.
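The numerical aperture of a step-index fiber follows directly from the core and cladding refractive indices. A minimal sketch (the index values in the test are illustrative, not those of the patented glasses):

```python
import math

def numerical_aperture(n_core, n_clad):
    """NA of a step-index fiber: sqrt(n_core^2 - n_clad^2)."""
    return math.sqrt(n_core**2 - n_clad**2)

def acceptance_half_angle_deg(n_core, n_clad, n_outside=1.0):
    """Half-angle of the acceptance cone in the outside medium,
    from NA = n_outside * sin(theta_max)."""
    return math.degrees(math.asin(numerical_aperture(n_core, n_clad) / n_outside))
```

This is why a very high core index is the route to very high NA: for a fixed cladding index, the NA grows with the core-cladding index contrast.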

  17. Automated change detection for synthetic aperture sonar

    NASA Astrophysics Data System (ADS)

    G-Michael, Tesfaye; Marchand, Bradley; Tucker, J. D.; Sternlicht, Daniel D.; Marston, Timothy M.; Azimi-Sadjadi, Mahmood R.

    2014-05-01

    In this paper, an automated change detection technique is presented that compares new and historical seafloor images created with sidescan synthetic aperture sonar (SAS) for changes occurring over time. The method consists of a four-stage process: a coarse navigational alignment; fine-scale co-registration using the scale invariant feature transform (SIFT) algorithm to match features between overlapping images; sub-pixel co-registration to improve phase coherence; and finally, change detection utilizing canonical correlation analysis (CCA). The method was tested using data collected with a high-frequency SAS in a sandy shallow-water environment. By using precise co-registration tools and change detection algorithms, it is shown that the coherent nature of the SAS data can be exploited in this environment over time scales ranging from hours to several days.
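The "coherent nature of the SAS data" that the co-registration stages protect is typically quantified by the complex coherence between passes; a drop in coherence marks change. A minimal sketch computing a single global estimate (the paper's CCA-based detector operates locally, not globally):

```python
import numpy as np

def coherence(img1, img2, eps=1e-12):
    """Magnitude of the normalized complex correlation between two
    co-registered complex images: near 1 for an unchanged scene,
    lower where the seafloor has changed between passes."""
    num = np.abs(np.sum(img1 * np.conj(img2)))
    den = np.sqrt(np.sum(np.abs(img1) ** 2) * np.sum(np.abs(img2) ** 2))
    return num / (den + eps)
```

Sub-pixel misregistration degrades this statistic even when nothing has changed, which is why the SIFT and sub-pixel alignment stages precede the detector.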

  18. Optical aperture synthesis with electronically connected telescopes.

    PubMed

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D

    2015-04-16

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths.
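Intensity interferometry correlates only the detected intensities, which is why the telescopes need only electronic links. The measured quantity on each baseline is the normalized second-order correlation; a minimal sketch, exercised with simulated thermal (exponentially distributed) intensities:

```python
import numpy as np

def g2(i1, i2):
    """Normalized second-order (intensity) correlation between two
    telescopes: g2 = <I1*I2> / (<I1><I2>).  For thermal light the
    excess g2 - 1 traces the squared fringe visibility of the source
    on that baseline."""
    i1 = np.asarray(i1, float)
    i2 = np.asarray(i2, float)
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))
```

Because g2 depends only on intensity products, the signal from one telescope can be copied and correlated against many partners, which is what makes the 180-baseline arrays mentioned above practical.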

  19. Saturation of the Large Aperture Scintillometer

    NASA Astrophysics Data System (ADS)

    Kohsiek, W.; Meijninger, W. M. L.; Debruin, H. A. R.; Beyrich, F.

    2006-10-01

    The saturation aspects of a large aperture (0.3 m) scintillometer operating over a 10-km path were investigated. Measurements were made over mainly forested, hilly terrain with typical maximum sensible heat fluxes of 300-400 W m-2, and over flat terrain with mainly grass and typical maximum heat fluxes of 100-150 W m-2. Scintillometer-based fluxes were compared with eddy-correlation observations. Two different schemes for calculating the reduction of scintillation caused by saturation were applied: one based on the work of Hill and Clifford, the other on Frehlich and Ochs. Without saturation correction, the scintillometer fluxes were lower than the eddy-correlation fluxes; the saturation correction according to Frehlich and Ochs increased the scintillometer fluxes to an unrealistic level. Correcting the fluxes following the theory of Hill and Clifford gave satisfactory results.

  20. Bistatic synthetic aperture radar using two satellites

    NASA Technical Reports Server (NTRS)

    Tomiyasu, K.

    1978-01-01

    The paper demonstrates the feasibility of a bistatic synthetic aperture radar (BISAR) utilizing two satellites. The proposed BISAR assumes that the directions of the two narrow antenna beams are programmed to coincide over the desired area to be imaged. Functionally, the transmitter and receiver portions can be interchanged between the two satellites. The two satellites may be in one orbit plane or in two different orbits, such as geosynchronous and low-earth orbits. The pulse repetition frequency and imaging geometry are constrained by contours of isodops and isodels. With two images of the same area viewed from different angles, it is possible in principle to derive three-dimensional stereo images. Applications of BISAR include topography, water resource management, and soil moisture determination. Advantages of BISAR over a monostatic SAR are mentioned, including lower transmitter power and greater ranges in incidence angle and coverage.