Science.gov

Sample records for adaptive coded aperture

  1. Motion-adaptive compressive coded apertures

    NASA Astrophysics Data System (ADS)

    Harmany, Zachary T.; Oh, Albert; Marcia, Roummel; Willett, Rebecca

    2011-09-01

    This paper describes an adaptive compressive coded aperture imaging system for video based on motion-compensated video sparsity models. In particular, motion models based on optical flow and sparse deviations from optical flow (i.e. salient motion) can be used to (a) predict future video frames from previous compressive measurements, (b) perform reconstruction using efficient online convex programming techniques, and (c) adapt the coded aperture to yield higher reconstruction fidelity in the vicinity of this salient motion.
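
    The adaptation step (c) can be sketched minimally: compare a new frame against a prediction, treat the deviation as salient motion, and open aperture elements preferentially there. The toy frame generator, the deliberately crude zero-motion prediction, and the opening probabilities (0.8/0.3) are all hypothetical illustrations, not the authors' models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene: a bright block translating one pixel per frame.
base = np.zeros((16, 16))
base[4:8, 2:6] = 1.0

def frame(t):
    return np.roll(base, t, axis=1)

# (a) Predict the next frame; here a crude zero-motion prediction, so any
#     actual motion shows up as "salient" deviation from the prediction.
pred = frame(0)

# Salient motion = deviation of the new frame from the prediction.
residual = np.abs(frame(1) - pred)

# (c) Adapt the coded aperture: open mask elements with higher probability
#     (0.8, hypothetical) near salient motion, lower (0.3) elsewhere.
p_open = np.where(residual > 0, 0.8, 0.3)
mask = (rng.random(p_open.shape) < p_open).astype(float)
```

    In the real system the prediction would come from optical-flow-compensated reconstructions rather than a static copy of the previous frame.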

  2. Adaptive coded aperture imaging: progress and potential future applications

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Isser, Abraham; Gigioli, George W., Jr.

    2011-09-01

    Interest in Adaptive Coded Aperture Imaging (ACAI) continues to grow as the optical and systems engineering community becomes increasingly aware of ACAI's potential benefits in the design and performance of both imaging and non-imaging systems, such as good angular resolution (IFOV), wide distortion-free field of view (FOV), excellent image quality, and lightweight construction. In this presentation we first review the accomplishments made over the past five years, then expand on previously published work to show how replacement of conventional imaging optics with coded apertures can lead to a reduction in system size and weight. We also present a trade space analysis of key design parameters of coded apertures and review potential applications as replacements for traditional imaging optics. Results will be presented, based on last year's work, of our investigation into the trade space of IFOV, resolution, effective focal length, and wavelength of incident radiation for coded aperture architectures. Finally we discuss the potential application of coded apertures for replacing objective lenses of night vision goggles (NVGs).

  3. Adaptation of a neutron diffraction detector to coded aperture imaging

    SciTech Connect

    Vanier, P.E.; Forman, L.

    1997-02-01

    A coded aperture neutron imaging system developed at Brookhaven National Laboratory (BNL) has demonstrated that it is possible to record not only a flux of thermal neutrons at some position, but also the directions from whence they came. This realization of an idea which defied the conventional wisdom has provided a device which has never before been available to the nuclear physics community. A number of potential applications have been explored, including (1) counting warheads on a bus or in a storage area, (2) investigating inhomogeneities in drums of Pu-containing waste to facilitate non-destructive assays, (3) monitoring of vaults containing accountable materials, (4) detection of buried land mines, and (5) locating solid deposits of nuclear material held up in gaseous diffusion plants.

  4. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

    Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms that yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. Proper design of the coded aperture entries leads to good reconstruction quality. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, so the design of the coded apertures must take saturation into account. Saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show that the proposed method improves image reconstruction by up to 10 dB compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
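
    The between-snapshot adaptation can be illustrated with a deliberately simple sketch: clip measurements at a saturation level, then halve the transmittance of any coded-aperture entry that saturated before the next snapshot. The scene values, saturation level, and halving rule are illustrative assumptions, not the UAGCA filter itself:

```python
import numpy as np

SAT = 1.0                                  # detector saturation level (hypothetical units)
scene = np.array([0.2, 3.0, 0.5, 1.4])     # toy per-element scene intensities

T = np.ones_like(scene)                    # grayscale transmittances in [0, 1], start fully open

for snapshot in range(4):
    y = np.minimum(T * scene, SAT)         # measurement clips at saturation
    saturated = y >= SAT
    T[saturated] *= 0.5                    # attenuate saturated entries for the next snapshot

final = np.minimum(T * scene, SAT)         # last snapshot: no entry clips any more
```

    After a few snapshots each transmittance settles just low enough to use the sensor's dynamic range without clipping, which is the spirit of the adaptive update described above.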

  5. Nanoparticle-dispersed metamaterial sensors for adaptive coded aperture imaging applications

    NASA Astrophysics Data System (ADS)

    Nehmetallah, Georges; Banerjee, Partha; Aylo, Rola; Rogers, Stanley

    2011-09-01

    We propose tunable single-layer and multi-layer (periodic and with defect) structures comprising nanoparticle dispersed metamaterials in suitable hosts, including adaptive coded aperture constructs, for possible Adaptive Coded Aperture Imaging (ACAI) applications such as in microbolometry, pressure/temperature sensors, and directed energy transfer, over a wide frequency range, from visible to terahertz. These structures are easy to fabricate, are low-cost and tunable, and offer enhanced functionality, such as perfect absorption (in the case of bolometry) and low cross-talk (for sensors). Properties of the nanoparticle dispersed metamaterial are determined using effective medium theory.

  6. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  7. Dynamic optical aberration correction with adaptive coded apertures techniques in conformal imaging

    NASA Astrophysics Data System (ADS)

    Li, Yan; Hu, Bin; Zhang, Pengbin; Zhang, Binglong

    2015-02-01

    Conformal imaging systems are confronted with dynamic aberrations during optical design. In classical optical design, meeting demanding combined requirements on field of view, optical speed, environmental adaptation and imaging quality can be achieved only by introducing increasingly complex aberration correctors. In recent years of computational imaging, adaptive coded aperture techniques, which have several potential advantages over more traditional optical systems, have proven particularly suitable for military infrared imaging systems. The merits of this concept include low mass, volume and moments of inertia, potentially lower costs, graceful failure modes, and steerable fields of regard with no macroscopic moving parts. An example conformal imaging system design, in which the elements of a set of binary coded aperture masks are optimized, is presented in this paper; simulation results show that the optical performance is closely related to the mask design and to the optimization of the reconstruction algorithm. As a dynamic aberration corrector, a binary-amplitude mask located at the aperture stop is optimized to mitigate dynamic optical aberrations as the field of regard changes, while allowing sufficient information to be recorded by the detector for the recovery of a sharp image using digital image restoration in the conformal optical system.

  8. An adaptive coded aperture imager: building, testing and trialing a super-resolving terrestrial demonstrator

    NASA Astrophysics Data System (ADS)

    Slinger, Christopher W.; Bennett, Charlotte R.; Dyer, Gavin; Gilholm, Kevin; Gordon, Neil; Huckridge, David; McNie, Mark; Penney, Richard W.; Proudler, Ian K.; Rice, Kevin; Ridley, Kevin D.; Russell, Lee; de Villiers, Geoffrey D.; Watson, Philip J.

    2011-09-01

    There is an increasingly important requirement for day and night, wide field of view imaging and tracking for both imaging and sensing applications. Applications include military, security and remote sensing. We describe the development of a proof of concept demonstrator of an adaptive coded-aperture imager operating in the mid-wave infrared to address these requirements. This consists of a coded-aperture mask, a set of optics and a 4k x 4k focal plane array (FPA). This system can produce images with a resolution better than that achieved by the detector pixel itself (i.e. superresolution) by combining multiple frames of data recorded with different coded-aperture mask patterns. This superresolution capability has been demonstrated both in the laboratory and in imaging of real-world scenes, the highest resolution achieved being ½ the FPA pixel pitch. The resolution for this configuration is currently limited by vibration and theoretically ¼ pixel pitch should be possible. Comparisons have been made between conventional and ACAI solutions to these requirements and show significant advantages in size, weight and cost for the ACAI approach.

  9. An experimental infrared sensor using adaptive coded apertures for enhanced resolution

    NASA Astrophysics Data System (ADS)

    Gordon, Neil T.; de Villiers, Geoffrey D.; Ridley, Kevin D.; Bennett, Charlotte R.; McNie, Mark E.; Proudler, Ian K.; Russell, Lee; Slinger, Christopher W.; Gilholm, Kevin

    2010-08-01

    Adaptive coded aperture imaging (ACAI) has the potential to greatly enhance the performance of sensing systems by allowing sub-detector-pixel imaging and tracking resolution. A small experimental system has been set up to allow the practical demonstration of these benefits in the mid infrared, as well as to investigate the calibration and stability of the system. The system can also be used to test modeling of similar ACAI systems in the infrared. The demonstrator can use either a set of fixed masks or a novel MOEMS adaptive transmissive spatial light modulator. This paper discusses the design and testing of the system, including the development of novel decoding algorithms, and some initial imaging results are presented.

  10. Utilizing micro-electro-mechanical systems (MEMS) micro-shutter designs for adaptive coded aperture imaging (ACAI) technologies

    NASA Astrophysics Data System (ADS)

    Ledet, Mary M.; Starman, LaVern A.; Coutu, Ronald A., Jr.; Rogers, Stanley

    2009-08-01

    Coded aperture imaging (CAI) has been used in both the astronomical and medical communities for years due to its ability to image light at short wavelengths, thus replacing conventional lenses. Where CAI is limited, adaptive coded aperture imaging (ACAI) can recover what is lost. The use of photonic micro-electro-mechanical systems (MEMS) for creating adaptive coded apertures has been gaining momentum since 2007. Successful implementation of micro-shutter technologies would potentially enable the use of adaptive coded aperture imaging and non-imaging systems in current and future military surveillance and intelligence programs. In this effort, a prototype of MEMS micro-shutters has been designed and fabricated onto a 3 mm x 3 mm square of silicon substrate using the PolyMUMPs process. This prototype is a line-drivable array using thin flaps of polysilicon to cover and uncover an 8 x 8 array of 20 μm apertures. A characterization of the micro-shutters, including mechanical, electrical and optical properties, is provided. This prototype, its actuation scheme, and other designs for individual micro-shutters have been modeled and studied for feasibility purposes. In addition, micro-shutters fabricated from an Al-Au alloy on a quartz wafer were optically tested and characterized with a 632 nm HeNe laser.

  11. Coded aperture computed tomography

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Brady, David J.

    2009-08-01

    Diverse physical measurements can be modeled by X-ray transforms. While X-ray tomography is the canonical example, reference structure tomography (RST) and coded aperture snapshot spectral imaging (CASSI) are examples of physically unrelated but mathematically equivalent sensor systems. Historically, most x-ray transform based systems sample continuous distributions and apply analytical inversion processes. On the other hand, RST and CASSI generate discrete multiplexed measurements implemented with coded apertures. This multiplexing of coded measurements allows for compression of measurements from a compressed sensing perspective. Compressed sensing (CS) shows that if the object has a sparse representation in some basis, then a number of random projections, typically far fewer than prescribed by Shannon's sampling rate, captures enough information for a highly accurate reconstruction of the object. This paper investigates the role of coded apertures in x-ray transform measurement systems (XTMs) in terms of data efficiency and reconstruction fidelity from a CS perspective. To conduct this study, we construct a unified analysis using RST and CASSI measurement models. We also propose a novel compressive x-ray tomography measurement scheme which likewise exploits coding and multiplexing, and hence shares the analysis of the other two XTMs. Using this analysis, we perform a qualitative study on how coded apertures can be exploited to implement physical random projections by "regularizing" the measurement systems. Numerical studies and simulation results demonstrate several examples of the impact of coding.
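
    The CS claim above, that a sparse object can be recovered from far fewer random projections than Shannon sampling prescribes, can be sketched with ISTA (iterative soft thresholding). The dimensions, seed, and regularization weight are arbitrary choices for illustration, not parameters from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 32, 3                    # signal length, measurements, sparsity

x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = 3.0 + rng.random(k)       # strong sparse coefficients

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random projection matrix
y = A @ x                                      # m << n multiplexed measurements

# ISTA for min_z 0.5*||y - Az||^2 + lam*||z||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
z = np.zeros(n)
for _ in range(500):
    w = z - A.T @ (A @ z - y) / L      # gradient step on the quadratic term
    z = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)   # soft threshold
```

    With only m = n/2 random projections the iterate z recovers the support of the k-sparse signal, which is the "random projections capture enough information" point in miniature.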

  12. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and reconstructing the shadow image into a 3-dimensional image of every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
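
    The final reconstruction step, correlating the recorded shadow with a version of the coded aperture, can be sketched for a single point source and a random binary mask. The mask, source position, balanced decoder, and periodic (cyclic-shift) shadow model are illustrative assumptions; the patent's aperture is non-diffractive and confocal:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
mask = (rng.random((n, n)) < 0.5).astype(float)   # random coded aperture

# A point source at (r, c) casts a shifted copy of the mask pattern
# (modeled here as a cyclic shift of a periodically extended aperture).
r, c = 5, 12
shadow = np.roll(np.roll(mask, r, axis=0), c, axis=1)

# Decode by circular cross-correlation with a balanced version of the
# mask (2M - 1), computed with FFTs.
G = 2 * mask - 1
corr = np.real(np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(G))))

peak = np.unravel_index(np.argmax(corr), corr.shape)   # source location
```

    The correlation peaks at the source position because the balanced mask's autocorrelation is sharply peaked at zero lag; each point source contributes its own peak, which is how the shadow image decodes to an image of the volume.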

  13. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. More recent applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100 kHz) - allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPs-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.

  14. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.

  15. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. Massively parallel processing, with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM in future coded-aperture experiments with the help of the MPP.

  16. Longwave infrared (LWIR) coded aperture dispersive spectrometer.

    PubMed

    Fernandez, C; Guenther, B D; Gehm, M E; Brady, D J; Sullivan, M E

    2007-04-30

    We describe a static aperture-coded, dispersive longwave infrared (LWIR) spectrometer that uses a microbolometer array at the detector plane. The two-dimensional aperture code is based on a row-doubled Hadamard mask with transmissive and opaque openings. The independent-column-code nature of the matrix makes for a mathematically well-defined pattern that spatially and spectrally maps the source information to the detector plane. Post-processing techniques on the data provide spectral estimates of the source. Comparative experimental results between a slit and a coded aperture for emission spectroscopy from a CO2 laser are demonstrated. PMID:19532832
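
    A row-doubled Hadamard mask of the kind described can be sketched from a Sylvester-construction S-matrix: drop the first row and column of a normalized Hadamard matrix, map entries to open/opaque, then follow each row by its complement. The order (7) is arbitrary and much smaller than a practical spectrometer mask:

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(8)

# S-matrix: drop the first row/column of the normalized Hadamard matrix and
# map +1 -> 0 (opaque), -1 -> 1 (open).
S = (1 - H[1:, 1:]) // 2

# Row doubling: follow each mask row by its complement, giving the paired
# open/opaque patterns used in Hadamard-transform instruments.
mask = np.vstack([row for r in S for row in (r, 1 - r)])
```

    Each S-matrix row is half open (4 of 7 elements here), and each row pair sums to a fully open row, which is what makes the doubled mask's measurements easy to difference and invert.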

  17. Coded apertures for efficient pyroelectric motion tracking.

    PubMed

    Gopinathan, U; Brady, D; Pitsianis, N

    2003-09-01

    Coded apertures may be designed to modulate the visibility between source and measurement spaces such that the position of a source among N resolution cells may be discriminated using a number of measurements logarithmic in N. We use coded apertures as reference structures in a pyroelectric motion tracking system. This sensor system is capable of detecting source motion in one of 15 cells uniformly distributed over a 1.6 m x 1.6 m domain using 4 pyroelectric detectors. PMID:19466102
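
    The log-of-N claim can be made concrete with binary visibility codes: B = log2(N) masks suffice to discriminate N cells, because each detector reads off one bit of the occupied cell's index. The 16-cell, 4-detector layout below is a toy stand-in for the paper's 15-cell, 4-detector system:

```python
import numpy as np

N = 16                                   # resolution cells
B = int(np.log2(N))                      # 4 detectors/measurements suffice

# Visibility masks: detector b sees exactly the cells whose index has bit b set.
masks = np.array([[(cell >> b) & 1 for cell in range(N)] for b in range(B)])

def measure(source_cell):
    """Each detector reports 1 iff the source's cell is visible to it."""
    return masks[:, source_cell]

def locate(m):
    """Read the detector outputs back as the binary code of the cell index."""
    return int(sum(bit << b for b, bit in enumerate(m)))
```

    In the physical system the "bit" masks are implemented as coded-aperture reference structures in front of the pyroelectric detectors rather than as arrays in software.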

  18. Fast decoding algorithms for coded aperture systems

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

    2014-08-01

    Fast decoding algorithms are described for a number of established coded aperture systems. The fast decoding algorithms for all these systems offer significant reductions in the number of calculations required when reconstructing images formed by a coded aperture system and hence require less computation time to produce the images. The algorithms may therefore be of use in applications that require fast image reconstruction, such as near real-time nuclear medicine and location of hazardous radioactive spillage. Experimental tests confirm the efficacy of the fast decoding techniques.
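
    One generic route to faster decoding, not necessarily the specific algorithms of this paper, is replacing the O(n^2) direct periodic correlation with an O(n log n) FFT computation. A sketch verifying the two agree on random data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 257                                      # 1-D aperture/detector length
a = (rng.random(n) < 0.5).astype(float)      # aperture pattern
d = rng.random(n)                            # recorded detector data

# Direct periodic correlation decode: O(n^2) multiply-adds.
direct = np.array([np.dot(d, np.roll(a, t)) for t in range(n)])

# FFT-based decode of the same periodic correlation: O(n log n).
fast = np.real(np.fft.ifft(np.fft.fft(d) * np.conj(np.fft.fft(a))))
```

    The speedup matters exactly in the near-real-time settings the abstract mentions, where every frame must be decoded as it arrives.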

  19. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
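
    The URA idea, an aperture whose correlation with a special decoding array is a delta function with flat sidelobes, can be sketched in one dimension with a quadratic-residue sequence (prime p with p mod 4 = 1). This is an illustrative 1-D analogue chosen for its provably flat sidelobes, not the 2-D URAs of the paper:

```python
import numpy as np

p = 13                                   # prime with p % 4 == 1 -> flat sidelobes
qr = {(i * i) % p for i in range(1, p)}  # quadratic residues mod p

a = np.array([1.0 if i in qr else 0.0 for i in range(p)])
a[0] = 0.0                               # element 0 conventionally closed

# Special correlation decoder: +1 over open elements, -1 over closed,
# except G(0) = +1.
G = 2 * a - 1
G[0] = 1.0

# Periodic correlation of aperture with decoder: a delta function.
corr = np.array([np.dot(a, np.roll(G, t)) for t in range(p)])
```

    The zero off-peak values are the point: correlation decoding with this pair introduces no artifacts, while the deconvolution-style alternatives amplify noise wherever the aperture's transfer function is small.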

  20. A Germanium-Based, Coded Aperture Imager

    SciTech Connect

    Ziock, K P; Madden, N; Hull, E; William, C; Lavietes, T; Cork, C

    2001-10-31

    We describe a coded-aperture based, gamma-ray imager that uses a unique hybrid germanium detector system. A planar germanium strip detector, eleven millimeters thick, is followed by a coaxial detector. The 19 x 19 strip detector (2 mm pitch) is used to determine the location and energy of low-energy events. The locations of high-energy events are determined from the location of the Compton scatter in the planar detector, and the energy is determined from the sum of the coaxial and planar energies. With this geometry, we obtain useful quantum efficiency in a position-sensitive mode out to 500 keV. The detector is used with a 19 x 17 URA coded aperture to obtain spectrally resolved images in the gamma-ray band. We discuss the performance of the planar detector and the hybrid system, and present images taken of laboratory sources.

  1. Complementary lattice arrays for coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Ding, Jie; Noshad, Mohammad; Tarokh, Vahid

    2016-05-01

    In this work, we consider complementary lattice arrays in order to enable a broader range of designs for coded aperture imaging systems. We provide a general framework and methods that generate richer and more flexible designs than existing ones. Besides this, we review and interpret the state-of-the-art uniformly redundant arrays (URA) designs, broaden the related concepts, and further propose some new design methods.

  2. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  3. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

    Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  4. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with a 1.8-μCi and a 13-μCi 252Cf neutron/γ source at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led

  5. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

    In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is the mask and the other is the anti-mask. All of the collected data are processed through two versions of an image reconstruction algorithm: one treats the data as if they were obtained through the mask, the other as though they were obtained through the anti-mask.
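
    The mask/anti-mask idea can be sketched in one dimension: since the two patterns are complements, subtracting the two data sets cancels any uniform background exactly, and the difference decodes to a clean source peak. The source position, strength, and flat background below are made-up values, and the periodic shadow model is a simplification:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 31
mask = (rng.random(n) < 0.5).astype(float)
anti = 1.0 - mask                        # the complementary (anti-mask) pattern

src_pos, src_str, background = 7, 50.0, 20.0

# Detector data: the source's shadow through each pattern, plus an isotropic
# background that reaches the detector identically through both.
d_mask = src_str * np.roll(mask, src_pos) + background
d_anti = src_str * np.roll(anti, src_pos) + background

# Subtracting the anti-mask data cancels the background exactly ...
diff = d_mask - d_anti                   # = src_str * roll(2*mask - 1, src_pos)

# ... and correlating with the balanced mask localizes the source.
G = 2 * mask - 1
corr = np.array([np.dot(diff, np.roll(G, t)) for t in range(n)])
peak = int(np.argmax(corr))
```

    Running the same data through the mask decoder and the anti-mask decoder flips the sign of the peak, which is what lets the dual-sided imager tell the two sides of the vehicle apart.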

  6. Large aperture adaptive optics for intense lasers

    NASA Astrophysics Data System (ADS)

    Deneuville, François; Ropert, Laurent; Sauvageot, Paul; Theis, Sébastien

    2015-05-01

    ISP SYSTEM has developed a range of large aperture electro-mechanical deformable mirrors (DM) suitable for ultra short pulsed intense lasers. The design of the MD-AME deformable mirror is based on force application on numerous locations thanks to electromechanical actuators driven by stepper motors. DM design and assembly method have been adapted to large aperture beams and the performances were evaluated on a first application for a beam with a diameter of 250mm at 45° angle of incidence. A Strehl ratio above 0.9 was reached for this application. Simulations were correlated with measurements on optical bench and the design has been validated by calculation for very large aperture (up to Ø550mm). Optical aberrations up to Zernike order 5 can be corrected with a very low residual error as for actual MD-AME mirror. Amplitude can reach up to several hundreds of μm for low order corrections. Hysteresis is lower than 0.1% and linearity better than 99%. Contrary to piezo-electric actuators, the μ-AME actuators avoid print-through effects and they permit to keep the mirror shape stable even unpowered, providing a high resistance to electro-magnetic pulses. The MD-AME mirrors can be adapted to circular, square or elliptical beams and they are compatible with all dielectric or metallic coatings.

  7. Development of large aperture composite adaptive optics

    NASA Astrophysics Data System (ADS)

    Kmetik, Viliam; Vitovec, Bohumil; Jiran, Lukas; Nemcova, Sarka; Zicha, Josef; Inneman, Adolf; Mikulickova, Lenka; Pavlica, Richard

    2015-01-01

    Large aperture composite adaptive optics for laser applications is investigated in cooperation of Institute of Plasma Physic, Department of Instrumentation and Control Engineering FME CTU and 5M Ltd. We are exploring opportunity of a large-size high-power-laser deformable-mirror production using a lightweight bimorph actuated structure with a composite core. In order to produce a sufficiently large operational free aperture we are developing new technologies for production of flexible core, bimorph actuator and deformable mirror reflector. Full simulation of a deformable-mirrors structure was prepared and validated by complex testing. A deformable mirror actuation and a response of a complicated structure are investigated for an accurate control of the adaptive optics. An original adaptive optics control system and a bimorph deformable mirror driver were developed. Tests of material samples, components and sub-assemblies were completed. A subscale 120 mm bimorph deformable mirror prototype was designed, fabricated and thoroughly tested. A large-size 300 mm composite-core bimorph deformable mirror was simulated and optimized, fabrication of a prototype is carried on. A measurement and testing facility is modified to accommodate large sizes optics.

  8. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

We employ a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large-numerical-aperture x-ray imaging system. An algorithm to generate and fabricate the free-standing No-Two-Holes-Touching (NTHT) aperture pattern was developed. Algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray-tracing technique and confirmed by experiments on standard samples.

  9. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows imaging of fluorescent x-rays (6–25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large-numerical-aperture x-ray imaging system. An algorithm to generate the self-supported No-Two-Holes-Touching (NTHT) coded aperture pattern was developed. Algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choosing the excitation energy, the different metals were speciated.
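The spacing principle behind the NTHT patterns in the two records above can be illustrated by embedding a base binary pattern on a doubled grid, so every hole is surrounded by opaque (supporting) material. This is only the spacing idea; practical NTHT masks are derived from URA/MURA families, and the `ntht` name is ours:

```python
import numpy as np

def ntht(base):
    """Embed a binary base pattern on a doubled grid so no two holes touch.

    Each base element lands on an even (row, col) position, so any two holes
    are at least two pixels apart and never share an edge or a corner,
    leaving a connected web of material to support a free-standing mask.
    """
    h, w = base.shape
    out = np.zeros((2 * h, 2 * w), dtype=int)
    out[::2, ::2] = base
    return out
```

The cost is a mask with twice the linear size (and a quarter of the open fraction) for the same base pattern.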

  10. Telescope Adaptive Optics Code

    SciTech Connect

    Phillion, D.

    2005-07-28

The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave-optics capability to model the science-camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
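The FFT screen-generation step described above can be sketched as filtering white Gaussian noise by the square root of the Kolmogorov phase power spectral density. This is a minimal illustration, not the distributed code; the `r0` and grid values are hypothetical, and the low-frequency deficiency it leaves is exactly what the Karhunen-Loeve supplement corrects:

```python
import numpy as np

def fft_phase_screen(n=128, r0=0.1, dx=0.02, seed=0):
    """Single-layer Kolmogorov phase screen (radians) via FFT filtering.

    Filters complex Gaussian noise by sqrt of the Kolmogorov phase PSD,
    0.023 r0^(-5/3) f^(-11/3).  Low-order spatial frequencies are
    under-represented on the discrete grid.
    """
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(n, d=dx)
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    fr[0, 0] = np.inf                     # drop the undefined piston term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * fr ** (-11.0 / 3.0)
    df = 1.0 / (n * dx)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd)) * df * n * n
    return screen.real
```

Distributed (multi-layer) turbulence is then modeled by summing several such screens propagated from different altitudes.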

  11. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small-animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable-slit-aperture imaging system and discuss its application to adaptive imaging, along with reconstruction techniques that use an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultrahigh-resolution reconstruction code. PMID:26190884
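The reason crossed slits decouple the two magnifications follows from pinhole geometry: each slit plane sits at its own distance from the object, and each sets the magnification in one direction. A toy calculation under idealized geometry (the distances are made-up numbers, not from the paper):

```python
def slit_magnification(z_slit, z_detector):
    """Pinhole-style magnification for a slit at distance z_slit from the
    object, with the detector at z_detector (object at z = 0)."""
    return (z_detector - z_slit) / z_slit

# the two slit planes sit at different distances, so the magnifications differ
m_transaxial = slit_magnification(20.0, 120.0)   # slit nearer the object
m_axial = slit_magnification(60.0, 120.0)        # skewed slit farther away
```

Moving either blade set therefore tunes one magnification independently of the other, which is the adaptive degree of freedom the system exploits.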


  13. High transparency coded apertures in planar nuclear medicine imaging.

    PubMed

    Starfield, David M; Rubin, David M; Marwala, Tshilidzi

    2007-01-01

Coded apertures provide an alternative to the collimators of nuclear medicine imaging, and advances in the field have lessened the artifacts associated with the near-field geometry. Thickness of the aperture material, however, results in a decoded image with thickness artifacts and constrains both image resolution and the available manufacturing techniques. Thus, in theory, thin apertures are clearly desirable, but high transparency leads to a loss of contrast in the recorded data. Coupled with the quantization effects of detectors, this leads to significant noise in the decoded image. This noise depends on the bit depth of the gamma camera: if there are a sufficient number of measurable values, high transparency need not adversely affect the signal-to-noise ratio. This novel hypothesis is tested by means of a ray-tracing computer simulator. The simulation results presented in the paper show that replacing a highly opaque coded aperture with a highly transparent aperture, simulated with an 8-bit gamma camera, worsens the root-mean-square error measurement. However, when simulated with a 16-bit gamma camera, a highly transparent coded aperture significantly reduces both thickness artifacts and the root-mean-square error measurement. PMID:18002997
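A back-of-envelope calculation shows why bit depth governs the transparency trade-off described above (this is our own illustration, not the paper's ray-tracing simulator):

```python
def effective_levels(transparency, bits):
    """Distinguishable detector levels spanning the open/blocked contrast.

    A mask element with transmission t modulates the recorded flux by only
    (1 - t) of full scale, so the quantizer resolves that modulation range
    with roughly (2**bits - 1) * (1 - t) levels.
    """
    return (2 ** bits - 1) * (1.0 - transparency)
```

For a 90%-transparent mask this leaves about 25 usable levels on an 8-bit camera but about 6553 on a 16-bit camera, consistent with the paper's finding that high transparency only pays off at higher bit depth.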

  14. Coded aperture design in mismatched compressive spectral imaging.

    PubMed

    Galvis, Laura; Arguello, Henry; Arce, Gonzalo R

    2015-11-20

Compressive spectral imaging (CSI) senses a scene by using two-dimensional coded projections such that the number of measurements is far less than that used in spectral scanning-type instruments. An architecture that efficiently implements CSI is the coded aperture snapshot spectral imager (CASSI). A physical limitation of the CASSI is the system resolution, which is determined by the lowest-resolution element used in the detector and the coded aperture. Although the final resolution of the system is usually given by the detector, in the CASSI the use of a low-resolution coded aperture implemented with a digital micromirror device (DMD), which induces the grouping of detector pixels into superpixels, determines the final resolution. The mismatch arises from the difference in pitch between the DMD mirrors and the focal plane array (FPA) pixels. A traditional solution to this mismatch consists of grouping several pixels into square features, which underutilizes the DMD and detector resolution and therefore reduces the spatial and spectral resolution of the reconstructed spectral images. This paper presents a model for CASSI which accounts for the mismatch and permits exploiting the maximum resolution of the coding element and the FPA sensor. A super-resolution algorithm and a synthetic coded aperture are developed in order to solve the mismatch. The mathematical models are verified using a real implementation of CASSI. The results of the experiments show a significant gain in spatial and spectral imaging quality over the traditional pixel-grouping technique. PMID:26836551

  15. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

An investigation into the gamma-ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma-ray astronomical telescopes in terms of the provision of high-quality sky images in conjunction with an imaging plane that can reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented, and the imaging quality is quantified in terms of the signal-to-noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma-ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce the experimental conditions as closely as possible. The 60-degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, rendering a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for reducing the coding-error noise associated with the gamma-ray images.

  16. Two-Dimensional Aperture Coding for Magnetic Sector Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Russell, Zachary E.; Chen, Evan X.; Amsden, Jason J.; Wolter, Scott D.; Danell, Ryan M.; Parker, Charles B.; Stoner, Brian R.; Gehm, Michael E.; Brady, David J.; Glass, Jeffrey T.

    2015-02-01

    In mass spectrometer design, there has been a historic belief that there exists a fundamental trade-off between instrument size, throughput, and resolution. When miniaturizing a traditional system, performance loss in either resolution or throughput would be expected. However, in optical spectroscopy, both one-dimensional (1D) and two-dimensional (2D) aperture coding have been used for many years to break a similar trade-off. To provide a viable path to miniaturization for harsh environment field applications, we are investigating similar concepts in sector mass spectrometry. Recently, we demonstrated the viability of 1D aperture coding and here we provide a first investigation of 2D coding. In coded optical spectroscopy, 2D coding is preferred because of increased measurement diversity for improved conditioning and robustness of the result. To investigate its viability in mass spectrometry, analytes of argon, acetone, and ethanol were detected using a custom 90-degree magnetic sector mass spectrometer incorporating 2D coded apertures. We developed a mathematical forward model and reconstruction algorithm to successfully reconstruct the mass spectra from the 2D spatially coded ion positions. This 2D coding enabled a 3.5× throughput increase with minimal decrease in resolution. Several challenges were overcome in the mass spectrometer design to enable this coding, including the need for large uniform ion flux, a wide gap magnetic sector that maintains field uniformity, and a high resolution 2D detection system for ion imaging. Furthermore, micro-fabricated 2D coded apertures incorporating support structures were developed to provide a viable design that allowed ion transmission through the open elements of the code.

  17. Two-dimensional aperture coding for magnetic sector mass spectrometry.

    PubMed

    Russell, Zachary E; Chen, Evan X; Amsden, Jason J; Wolter, Scott D; Danell, Ryan M; Parker, Charles B; Stoner, Brian R; Gehm, Michael E; Brady, David J; Glass, Jeffrey T

    2015-02-01

    In mass spectrometer design, there has been a historic belief that there exists a fundamental trade-off between instrument size, throughput, and resolution. When miniaturizing a traditional system, performance loss in either resolution or throughput would be expected. However, in optical spectroscopy, both one-dimensional (1D) and two-dimensional (2D) aperture coding have been used for many years to break a similar trade-off. To provide a viable path to miniaturization for harsh environment field applications, we are investigating similar concepts in sector mass spectrometry. Recently, we demonstrated the viability of 1D aperture coding and here we provide a first investigation of 2D coding. In coded optical spectroscopy, 2D coding is preferred because of increased measurement diversity for improved conditioning and robustness of the result. To investigate its viability in mass spectrometry, analytes of argon, acetone, and ethanol were detected using a custom 90-degree magnetic sector mass spectrometer incorporating 2D coded apertures. We developed a mathematical forward model and reconstruction algorithm to successfully reconstruct the mass spectra from the 2D spatially coded ion positions. This 2D coding enabled a 3.5× throughput increase with minimal decrease in resolution. Several challenges were overcome in the mass spectrometer design to enable this coding, including the need for large uniform ion flux, a wide gap magnetic sector that maintains field uniformity, and a high resolution 2D detection system for ion imaging. Furthermore, micro-fabricated 2D coded apertures incorporating support structures were developed to provide a viable design that allowed ion transmission through the open elements of the code. PMID:25510933
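The forward-model-plus-reconstruction pipeline described in the two records above can be sketched in one dimension: each mass bin projects a shifted copy of the aperture code onto the detector, giving a linear system that is inverted to recover the spectrum. The code vector is hypothetical and plain least squares stands in for the authors' reconstruction algorithm:

```python
import numpy as np

def forward_matrix(code):
    """System matrix whose columns are circular shifts of the aperture code:
    column j is the detector pattern produced by unit flux in mass bin j."""
    n = len(code)
    return np.stack([np.roll(code, j) for j in range(n)], axis=1)

def reconstruct(measurement, A):
    """Recover the spectrum from the coded measurement by least squares."""
    x, *_ = np.linalg.lstsq(A, measurement, rcond=None)
    return x
```

The throughput gain comes from the many open elements of the code; the reconstruction step then undoes the deliberate multiplexing.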

  18. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone
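The matching step at the heart of the depth-from-defocus approach described above can be sketched as comparing an observed blur patch against a calibrated bank of blur patterns, one per known depth. The toy data and sum-of-squared-differences score are our own simplification:

```python
import numpy as np

def estimate_depth(patch, kernel_bank, depths):
    """Pick the calibration depth whose coded-aperture blur pattern best
    matches the observed patch (sum-of-squared-differences score)."""
    scores = [np.sum((patch - k) ** 2) for k in kernel_bank]
    return depths[int(np.argmin(scores))]
```

As the abstract notes, the blur pattern changes less and less with depth far from the focal plane, so neighboring bank entries become hard to distinguish and the depth estimate degrades.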

  19. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

In the present paper, and in a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on Fresnel-Kirchhoff diffraction theory, formulae for the point spread function (PSF) are derived. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.

  20. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

Hyperspectral remote sensing technology provides detailed spectral information from every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors, and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that using colored coded-apertures improves the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance map from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. Then a numerical procedure estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed by a 2D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains results comparable to the full-data-cube unmixing technique, but using fewer measurements.

  1. Two-Sided Coded Aperture Imaging Without a Detector Plane

    SciTech Connect

    Ziock, Klaus-Peter; Cunningham, Mark F; Fabris, Lorenzo

    2009-01-01

We introduce a novel design for a two-sided, coded-aperture, gamma-ray imager suitable for use in standoff detection of orphan radioactive sources. The design is an extension of an active-mask imager that would have three active planes of detector material: a central plane acting as the detector for two (active) coded-aperture mask planes, one on either side of the detector plane. In the new design the central plane is removed, and the mask on the left (right) serves as the detector plane for the mask on the right (left). This design reduces the size, mass, complexity, and cost of the overall instrument. In addition, with fully position-sensitive detectors, the two planes can be used as a classic Compton camera. This enhances the instrument's sensitivity at higher energies, where the coded-aperture efficiency is decreased by mask penetration. A plausible design for the system is found and explored with Monte Carlo simulations.

  2. Adaptive Full Aperture Wavefront Sensor Study

    NASA Technical Reports Server (NTRS)

    Robinson, William G.

    1997-01-01

This grant and the work described were in support of a Seven Segment Demonstrator (SSD) and a review of wavefront sensing techniques proposed by the Government and contractors for the Next Generation Space Telescope (NGST) Program. A team developed the SSD concept. For completeness, some of the information included in this report has also been included in the final report of a follow-on contract (H-27657D) entitled "Construction of Prototype Lightweight Mirrors". The original purpose of this GTRI study was to investigate how various wavefront sensing techniques might be most effectively employed with large (greater than 10 meter) aperture space-based telescopes used for commercial and scientific purposes. However, due to changes in the scope of the work performed on this grant, and in light of the initial studies completed for the NGST program, only a portion of this report addresses wavefront sensing techniques. The wavefront sensing techniques proposed for the NGST were summarized in proposals and briefing materials developed by three study teams: NASA Goddard Space Flight Center, TRW, and Lockheed-Martin. In this report, GTRI reviews these approaches and makes recommendations concerning them. The objectives of the SSD were to demonstrate the functionality and performance of a seven-segment prototype array of hexagonal mirrors and supporting electromechanical components which address design issues critical to space optics deployed in large space-based telescopes for astronomy and in optics used in space-based optical communications systems. The SSD was intended to demonstrate technologies supporting the following capabilities: transportation in dense packaging within existing launcher payload envelopes, then deployment on orbit to form a space telescope with a large aperture; provision of very large (greater than 10 meter) primary reflectors of low mass and cost; and demonstration of the capability to form a segmented primary or

  3. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
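The noiseless entropy-coding stage mentioned above maps frequent quantizer indexes to short codewords. A minimal Huffman code-length computation (our own sketch; the original work's codecs are far more elaborate, and arithmetic coding would be an alternative):

```python
import heapq
from collections import Counter

def huffman_lengths(symbols):
    """Per-symbol codeword lengths for a Huffman code over the given stream."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {s: 1 for s in freq}
    # heap entries: (frequency, tie-break id, {symbol: depth-so-far})
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    nxt = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, nxt, merged))
        nxt += 1
    return heap[0][2]
```

Because codeword lengths vary with the index statistics, the output rate is variable, which is exactly why the buffer-control scheme in the abstract is needed for fixed-rate channels.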

  4. Hyperspectral pixel classification from coded-aperture compressive imaging

    NASA Astrophysics Data System (ADS)

    Ramirez, Ana; Arce, Gonzalo R.; Sadler, Brian M.

    2012-06-01

This paper describes a new approach, and its associated theoretical performance guarantees, for supervised hyperspectral image classification from compressive measurements obtained by a Coded Aperture Snapshot Spectral Imaging (CASSI) system. In one snapshot, the two-dimensional focal plane array (FPA) in the CASSI system captures the coded and spectrally dispersed source field of a three-dimensional data cube. Multiple snapshots are used to construct a set of compressive spectral measurements. The proposed approach is based on the concept that each pixel in the hyperspectral image lies in a low-dimensional subspace obtained from the training samples, and thus it can be represented as a sparse linear combination of vectors in the given subspace. The sparse vector representing the test pixel is then recovered from the set of compressive spectral measurements and used to determine the class label of the test pixel. The theoretical performance bounds of the classifier exploit the distance-preservation condition satisfied by the multiple-shot CASSI system and depend on the number of measurements collected, the coded aperture pattern, and the similarity between spectral signatures in the dictionary. Simulation experiments illustrate the performance of the proposed classification approach.
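The subspace idea above can be illustrated with a simplified class-wise least-squares variant: represent the test pixel in each class's training subspace and assign the class with the smallest reconstruction residual. This stands in for, but is not, the paper's sparse-recovery-from-compressive-measurements algorithm:

```python
import numpy as np

def classify(pixel, class_dicts):
    """Assign the class whose training subspace best reconstructs the pixel.

    class_dicts: list of (bands, n_train) arrays, one dictionary per class.
    Returns the index of the class with the smallest residual norm.
    """
    residuals = []
    for D in class_dicts:
        coef, *_ = np.linalg.lstsq(D, pixel, rcond=None)
        residuals.append(np.linalg.norm(pixel - D @ coef))
    return int(np.argmin(residuals))
```

In the compressive setting the same competition runs on projected data; the distance-preservation condition is what guarantees the residual ordering survives the projection.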

  5. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
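A one-dimensional analogue shows the balanced-correlation decoding the patent describes: holes map to +1 and non-holes to -1 in the decoding array (the third, zero-weighted support class of the self-supporting pattern is omitted here for brevity). The quadratic-residue construction below is a standard URA recipe, not the patent's specific 2D pattern:

```python
import numpy as np

def ura_1d(p):
    """1D uniformly redundant array from quadratic residues
    (p prime with p % 4 == 3); element 0 is taken as a hole."""
    qr = {(i * i) % p for i in range(1, p)}
    return np.array([1 if (i == 0 or i in qr) else 0 for i in range(p)])

def balanced_decode(detector, aperture):
    """Periodic correlation with holes -> +1 and non-holes -> -1, giving a
    delta-function response to a point source with flat sidelobes."""
    g = 2 * aperture - 1
    p = len(aperture)
    return np.array([detector @ np.roll(g, -t) for t in range(p)])
```

For this construction a point source decodes to a single spike with zero sidelobes, which is why URA-class patterns dominate coded-aperture work.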

  6. Coded aperture Fast Neutron Analysis: Latest design advances

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto; Lanza, Richard C.

    2001-07-01

Past studies have shown that materials of concern such as explosives or narcotics can be identified in bulk from their atomic composition. Fast Neutron Analysis (FNA) is a nuclear method capable of providing this information even when considerable penetration is needed. Unfortunately, the cross sections of the nuclear phenomena and the solid angles involved are typically small, so that it is difficult to obtain high signal-to-noise ratios in short inspection times. CAFNA aims at combining the compound specificity of FNA with the potentially high SNR of coded apertures, an imaging method successfully used in far-field 2D applications. The transition to a near-field, 3D, high-energy problem prevents a straightforward application of coded apertures and demands a thorough optimization of the system. In this paper, the considerations involved in the design of a practical CAFNA system for contraband inspection, its conclusions, and an estimate of the performance of such a system are presented as the evolution of the ideas presented in previous expositions of the CAFNA concept.

  7. Correlated Statistical Uncertainties in Coded-Aperture Imaging

    SciTech Connect

    Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter

    2014-01-01

In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.

  8. XCAT: the JANUS x-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Falcone, A. D.; Burrows, D. N.; Barthelmy, S.; Chang, W.; Fox, D.; Fredley, J.; Gehrels, N.; Kelly, M.; Klar, R.; Palmer, D.; Persyn, S.; Reichard, K.; Roming, P.; Seifert, E.; Smith, R. W. M.; Wood, P.; Zugger, M.

    2010-07-01

The JANUS mission concept is designed to study the high redshift universe using a small, agile Explorer-class observatory. The primary science goals of JANUS are to use high-redshift (z > 6) gamma-ray bursts and X-ray flashes to probe the early universe. The X-ray Coded Aperture Telescope (XCAT) and the Near-IR Telescope (NIRT) are the two primary instruments on JANUS. XCAT has been designed to detect bright X-ray flashes (XRFs) and gamma-ray bursts (GRBs) in the 1-20 keV energy band over a wide field of view (4 steradians), thus facilitating the detection of z>6 XRFs/GRBs, which can be further studied by other instruments. XCAT would use a coded mask aperture design with hybrid CMOS Si detectors. It would be sensitive to XRFs and GRBs with flux in excess of approximately 240 mCrab. In order to obtain redshift measurements and accurate positions from the NIRT, the spacecraft is designed to rapidly slew to source positions following a GRB trigger from XCAT. XCAT instrument design parameters and science goals are presented in this paper.

  9. Correlated statistical uncertainties in coded-aperture imaging

    NASA Astrophysics Data System (ADS)

    Fleenor, Matthew C.; Blackston, Matthew A.; Ziock, Klaus P.

    2015-06-01

    In nuclear security applications, coded-aperture imagers can provide a wealth of information regarding the attributes of both the radioactive and nonradioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address a deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explored how statistics-based alarming is impacted in a radiological search scenario.
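A toy numeric check makes the point of the two records above concrete: when pixels share a noise component, summing them and adding only the diagonal (per-pixel) variances understates the true uncertainty. The shared-noise model here is hypothetical, not the MURA-specific analysis of the papers:

```python
import numpy as np

rng = np.random.default_rng(42)
shared = rng.normal(size=100_000)          # noise component common to both pixels
pix_a = shared + rng.normal(size=100_000)  # pixel a: shared + independent noise
pix_b = shared + rng.normal(size=100_000)  # pixel b: shared + independent noise

var_uncorrelated = pix_a.var() + pix_b.var()   # diagonal terms only
var_with_covariance = np.var(pix_a + pix_b)    # includes the 2*cov(a, b) term
```

With positive correlation the true variance of the sum (about 6 here) exceeds the diagonal-only estimate (about 4), so alarm thresholds set from per-pixel errors alone would fire too often or too rarely depending on the sign of the covariance.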

  10. Coded-aperture Raman imaging for standoff explosive detection

    NASA Astrophysics Data System (ADS)

    McCain, Scott T.; Guenther, B. D.; Brady, David J.; Krishnamurthy, Kalyani; Willett, Rebecca

    2012-06-01

    This paper describes the design of a deep-UV Raman imaging spectrometer operating with an excitation wavelength of 228 nm. The designed system will provide the ability to detect explosives (both traditional military explosives and home-made explosives) from standoff distances of 1-10 meters with an interrogation area of 1 mm x 1 mm to 200 mm x 200 mm. This excitation wavelength provides resonant enhancement of many common explosives, no background fluorescence, and an enhanced cross-section due to the inverse wavelength scaling of Raman scattering. A coded-aperture spectrograph combined with compressive imaging algorithms will allow for wide-area interrogation with fast acquisition rates. Coded-aperture spectral imaging exploits the compressibility of hyperspectral data-cubes to greatly reduce the amount of acquired data needed to interrogate an area. The resultant systems are able to cover wider areas much faster than traditional push-broom and tunable filter systems. The full system design will be presented along with initial data from the instrument. Estimates for area scanning rates and chemical sensitivity will be presented. The system components include a solid-state deep-UV laser operating at 228 nm, a spectrograph consisting of well-corrected refractive imaging optics and a reflective grating, an intensified solar-blind CCD camera, and a high-efficiency collection optic.

  11. Coded aperture systems as non-conventional lensless imagers for the visible and infrared

    NASA Astrophysics Data System (ADS)

    Slinger, Chris; Gordon, Neil; Lewis, Keith; McDonald, Gregor; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; De Villiers, Geoff; Wilson, Rebecca

    2007-10-01

    Coded aperture imaging (CAI) has been used extensively at gamma- and X-ray wavelengths, where conventional refractive and reflective techniques are impractical. CAI works by coding optical wavefronts from a scene using a patterned aperture, detecting the resulting intensity distribution, and then using inverse digital signal processing to reconstruct an image. This paper considers the application of CAI to the visible and IR bands. Doing so has a number of potential advantages over existing imaging approaches at these longer wavelengths, including low mass, low volume, zero aberrations and distortions, and graceful failure modes. Adaptive coded aperture imaging (ACAI), facilitated by the use of a reconfigurable mask in a CAI configuration, adds further merits, an example being the ability to implement agile imaging modes with no macroscopic moving parts. However, diffraction effects must be considered, and photon-flux reductions can have adverse consequences for the achievable image quality. An analysis of these benefits and limitations is described, along with a description of a novel micro-opto-electro-mechanical systems (MOEMS) microshutter technology for use in thermal-band infrared ACAI systems. Preliminary experimental results are also presented.

  12. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.
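The per-channel update described here amounts to packing a channel address and a DAC code into one serial word per chip. A sketch of that packing step follows; the 5-bit-address/14-bit-data layout is our assumption about the AD5535 word format and should be verified against the datasheet:

```python
def pack_ad5535_word(channel: int, code: int) -> int:
    """Pack one DAC update into a 19-bit serial word.

    Assumed layout (check the AD5535 datasheet): 5 channel-address bits
    in the MSBs followed by 14 DAC-code bits.
    """
    if not 0 <= channel < 32:
        raise ValueError("32 channels per chip")
    if not 0 <= code < 1 << 14:
        raise ValueError("DAC code is 14 bits")
    return (channel << 14) | code

# Four chips driven in parallel: one word per chip per update cycle,
# mirroring the program's four-chip simultaneous transfer.
words = [pack_ad5535_word(ch, 0x1FFF) for ch in range(4)]
```

The actual driver would then clock these words out bit by bit over the parallel port, one data line per chip, which is what allows all four chips to load simultaneously.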

  13. Adaptive Matching of the Scanning Aperture of the Environment Parameter

    NASA Astrophysics Data System (ADS)

    Choni, Yu. I.; Yunusov, N. N.

    2016-04-01

    We analyze a matching system for the scanning aperture antenna radiating through a layer with unpredictably changing parameters. Improved matching has been achieved by adaptive motion of a dielectric plate in the gap between the aperture and the radome. The system is described within the framework of an infinite layered structure. The validity of the model has been confirmed by numerical simulation using CST Microwave Studio software and by an experiment. It is shown that the reflection coefficient at the input of some types of a matching device, which is due to the deviation of the load impedance from the nominal value, is determined by a compact and versatile formula. The potential efficiency of the proposed matching system is shown by a specific example, and its dependence on the choice of the starting position of the dielectric plate is demonstrated.

  15. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error-correcting codes in digital transmission systems, are generally decoded using the Viterbi decoder. On the one hand, the Viterbi decoder is an optimum maximum-likelihood decoder, i.e., it finds the most probable transmitted code sequence. On the other hand, the computational complexity of the algorithm depends only on the code used, not on the number of transmission errors. To reduce the complexity of decoding under good transmission conditions, an alternative syndrome-based decoder is presented. The reduction in complexity is realized by two different approaches: syndrome zero-sequence deactivation and path-metric equalization. The two approaches enable easy adaptation of the decoding complexity to different transmission conditions, resulting in a trade-off between decoding complexity and error-correction performance.
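For reference, the baseline the paper improves on is the textbook Viterbi algorithm. A minimal hard-decision sketch for the common rate-1/2, constraint-length-3 code with generators (7, 5) octal (our example code, not necessarily the one used in the paper):

```python
G = (0b111, 0b101)  # generator polynomials (7, 5) octal, constraint length 3

def encode(bits):
    """Rate-1/2 convolutional encoder: two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state              # current bit + two previous bits
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1                    # shift register advances
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the 4-state trellis."""
    INF = float("inf")
    metric = [0, INF, INF, INF]             # start in the all-zero state
    paths = [[], [], [], []]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                out = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[state] + (out[0] != r[0]) + (out[1] != r[1])
                nxt = reg >> 1
                if m < new_metric[nxt]:     # keep the survivor path
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(4), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                  # inject a single channel error
print(viterbi(rx) == msg)                   # → True: the error is corrected
```

Note the work per pulse here is the same whether the channel is clean or noisy, which is exactly the constant-complexity property the syndrome-based decoder in the paper is designed to relax.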

  16. Implementation of Hadamard spectroscopy using MOEMS as a coded aperture

    NASA Astrophysics Data System (ADS)

    Vasile, T.; Damian, V.; Coltuc, D.; Garoi, F.; Udrea, C.

    2015-02-01

    Although spectrometers have nowadays reached a high level of performance, output signals are often weak, and traditional slit spectrometers still confront the problem of poor optical throughput, which limits their efficiency in low-light conditions. To overcome these issues, Hadamard spectroscopy (HS) was implemented in a conventional Ebert-Fastie spectrometer by substituting the exit slit with a digital micromirror device (DMD), which acts as a coded aperture. The theory behind HS and the operation of the DMD are presented. The improvements brought by HS are demonstrated in a spectrometric experiment in which a higher-SNR spectrum is acquired. Comparative experiments were conducted to quantify the SNR difference between HS and the scanning-slit method; the results show an SNR gain of 3.35 in favor of HS. We conclude that the HS method is a great asset for low-light spectrometric experiments.
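The throughput advantage of Hadamard multiplexing can be illustrated with a cyclic S-matrix: each measurement opens roughly half the spectral channels at once, and the spectrum is recovered by inverting the known mask matrix. A toy sketch, using the order-7 S-matrix built from the (7, 4, 2) cyclic difference set {0, 1, 2, 4} (illustrative numbers, not the paper's instrument):

```python
import numpy as np

first_row = np.zeros(7, dtype=int)
first_row[[0, 1, 2, 4]] = 1                 # ones at a (7,4,2) difference set
S = np.array([np.roll(first_row, k) for k in range(7)])

x = np.array([3.0, 0.0, 1.0, 4.0, 0.0, 2.0, 5.0])  # true spectrum (arbitrary)
y = S @ x                                   # each reading sums ~half the channels
x_hat = np.linalg.solve(S.astype(float), y)  # decode with the known mask matrix
print(np.allclose(x_hat, x))                # → True
```

Because every reading collects light from about half the channels instead of one, detector-noise-limited measurements gain SNR roughly as (n+1)/(2*sqrt(n)), which is the multiplex advantage the abstract reports experimentally.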

  17. Snapshot fan beam coded aperture coherent scatter tomography.

    PubMed

    Hassan, Mehadi; Greenberg, Joel A; Odinaka, Ikenna; Brady, David J

    2016-08-01

    We use coherently scattered X-rays to measure the molecular composition of an object throughout its volume. We image a planar slice of the object in a single snapshot by illuminating it with a fan beam and placing a coded aperture between the object and the detectors. We characterize the system and demonstrate a resolution of 13 mm in range and 2 mm in cross-range, with a fractional momentum-transfer resolution of 15%. In addition, we show that this technique allows a 100x speedup compared to previously studied pencil-beam systems using the same components. Finally, by scanning an object through the beam, we image the full 4-dimensional data cube (3 spatial dimensions and 1 material dimension) for complete volumetric molecular imaging. PMID:27505791

  18. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  19. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on the one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows parts of the philosophy of DCON by abandoning relaxation methods based on radial finite-element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue (ω) problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be applied to study transport-barrier physics in tokamak discharges.

  20. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    NASA Astrophysics Data System (ADS)

    Zhang, F. N.; Hu, H. S.; Zhang, T. K.; Jia, Q. G.; Wang, D. M.; Jia, J.

    2015-12-01

    Aperture alignment is crucial for the diagnosis of neutron imaging because it has a significant impact on the coded imaging and on the understanding of the neutron source. In our previous studies of a coded-aperture neutron imaging system with a large field of view, a "residual watermark," extra information that overlies the reconstructed image and has nothing to do with the source, was discovered when peak normalization is employed in the genetic algorithm (GA) used to reconstruct the source image. Studies of the basic properties of the residual watermark indicate that it characterizes the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we further analyze the essential conditions for the existence of the residual watermark and the requirements the reconstruction algorithm must meet for it to emerge. A gamma coded-imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for aperture misalignment has been studied. A multiple linear regression model relating the position of the coded-aperture axis to the position of the residual-watermark center and the gray barycenter of the neutron source has been set up using twenty training samples. Using the regression model and verification samples, we have found the position of the coded-aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct coded-aperture misalignment for fast-neutron coded imaging.
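The calibration step described, a multiple linear regression fitted to twenty training samples, can be sketched with `numpy.linalg.lstsq`. All numbers below are synthetic stand-ins for illustration, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical training data: 20 samples of (watermark-center x,
# watermark-center y, source gray-barycenter x) -> aperture-axis offset,
# generated from a known linear law plus small measurement noise.
X = rng.uniform(-1.0, 1.0, size=(20, 3))
true_coef = np.array([0.8, -0.3, 0.1])
offset = X @ true_coef + 0.05 + rng.normal(0.0, 1e-3, 20)

A = np.column_stack([X, np.ones(20)])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, offset, rcond=None)
print(coef)                                 # recovers ≈ [0.8, -0.3, 0.1, 0.05]
```

Once fitted, the model predicts the aperture-axis position for new watermark measurements, which is how the paper achieves its ~20 μm alignment accuracy.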

  3. Spatial super-resolution in code aperture spectral imaging

    NASA Astrophysics Data System (ADS)

    Arguello, Henry; Rueda, Hoover F.; Arce, Gonzalo R.

    2012-06-01

    The Coded Aperture Snapshot Spectral Imaging (CASSI) system senses the spectral information of a scene using the underlying concepts of compressive sensing (CS). The random projections in CASSI are localized such that each measurement contains spectral information only from a small spatial region of the data cube. The goal of this paper is to translate high-resolution hyperspectral scenes into compressed signals measured by a low-resolution detector. Spatial super-resolution is attained as an inverse problem from a set of low-resolution coded measurements. The proposed system not only offers significant savings in size, weight, and power, but also in cost, as low-resolution detectors can be used. It can be exploited efficiently in the IR region, where the cost of detectors increases rapidly with resolution. Simulations of the proposed system show an improvement of up to 4 dB in PSNR. Results also show that the PSNR of the reconstructed data cubes approaches that attained with high-resolution detectors, at the cost of additional measurements.
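Recovering a high-resolution signal from coded low-resolution measurements is a sparse inverse problem of the kind typically solved with iterative soft thresholding. A generic 1-D ISTA sketch under a random sensing matrix (a toy stand-in for CASSI's actual forward model):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 60, 40, 3                          # signal length, measurements, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)     # random sensing matrix (toy model)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = 5.0 + rng.uniform(0.0, 1.0, k)   # well-separated nonzeros
y = A @ x_true                               # compressed measurements

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    z = x + A.T @ (y - A @ x) / L            # gradient step on the data fit
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

recovered = np.sort(np.argsort(np.abs(x))[-k:])
print(recovered, np.sort(support))           # the two index sets should match
```

The same shrinkage-based iteration, applied with CASSI's coded-aperture forward operator and a sparsifying transform, is what makes reconstruction from an undersampled detector feasible.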

  4. Direct aperture optimization for online adaptive radiation therapy

    SciTech Connect

    Mestrovic, Ante; Milette, Marie-Pierre; Nichol, Alan; Clark, Brenda G.; Otto, Karl

    2007-05-15

    This paper is the first investigation of using direct aperture optimization (DAO) for online adaptive radiation therapy (ART). A geometrical model representing the anatomy of a typical prostate case was created. To simulate interfractional deformations, four different anatomical deformations were created by systematically deforming the original anatomy by various amounts (0.25, 0.50, 0.75, and 1.00 cm). We describe a series of techniques where the original treatment plan was adapted in order to correct for the deterioration of dose distribution quality caused by the anatomical deformations. We found that the average time needed to adapt the original plan to arrive at a clinically acceptable plan is roughly half of the time needed for a complete plan regeneration, for all four anatomical deformations. Furthermore, through modification of the DAO algorithm the optimization search space was reduced and the plan adaptation was significantly accelerated. For the first anatomical deformation (0.25 cm), the plan adaptation was six times more efficient than the complete plan regeneration. For the 0.50 and 0.75 cm deformations, the optimization efficiency was increased by a factor of roughly 3 compared to the complete plan regeneration. However, for the anatomical deformation of 1.00 cm, the reduction of the optimization search space during plan adaptation did not result in any efficiency improvement over the original (nonmodified) plan adaptation. The anatomical deformation of 1.00 cm demonstrates the limit of this approach. We propose an innovative approach to online ART in which the plan adaptation and radiation delivery are merged together and performed concurrently: adaptive radiation delivery (ARD). A fundamental advantage of ARD is the fact that radiation delivery can start almost immediately after image acquisition and evaluation. Most of the original plan adaptation is done during the radiation delivery, so the time spent adapting the original plan does not extend the overall treatment time.

  5. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem, as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed, its performance is characterized in a simulated space application, and the research and development activities are described.

  6. Event localization in bulk scintillator crystals using coded apertures

    NASA Astrophysics Data System (ADS)

    Ziock, K. P.; Braverman, J. B.; Fabris, L.; Harrison, M. J.; Hornback, D.; Newby, J.

    2015-06-01

    The localization of radiation interactions in bulk scintillators is generally limited by the size of the light distribution at the readout surface of the crystal/light-pipe system. By finding the centroid of the light spot, which is typically of order centimeters across, practical single-event localization is limited to ~2 mm/cm of crystal thickness. Similar resolution can also be achieved for the depth of interaction by measuring the size of the light spot. Through the use of near-field coded-aperture techniques applied to the scintillation light, light transport simulations show that for 3-cm-thick crystals, more than a five-fold improvement (millimeter spatial resolution) can be achieved both laterally and in event depth. At the core of the technique is the requirement to resolve the shadow from an optical mask placed in the scintillation light path between the crystal and the readout. In this paper, experimental results are presented that demonstrate the overall concept using a 1D shadow mask, a thin-scintillator crystal and a light pipe of varying thickness to emulate a 2.2-cm-thick crystal. Spatial resolutions of ~1 mm in both depth and transverse to the readout face are obtained over most of the crystal depth.
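The conventional baseline this record improves upon, centroiding the scintillation-light spot on the readout plane, reduces to a weighted mean over pixel coordinates. A toy sketch with a synthetic Gaussian light spot (all dimensions illustrative):

```python
import numpy as np

def centroid(image):
    """Centroid (row, col) of a light distribution on the readout plane."""
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# Synthetic Gaussian light spot centered at (12.0, 20.0) on a 32x48 readout,
# with a width (sigma = 4 pixels) typical of centimeter-scale light spread.
r, c = np.indices((32, 48))
spot = np.exp(-((r - 12.0) ** 2 + (c - 20.0) ** 2) / (2 * 4.0 ** 2))
print(centroid(spot))   # close to (12.0, 20.0)
```

Because the centroid's precision degrades with the width of the light spot, resolution scales with crystal thickness; the near-field coded-aperture mask in the paper sharpens the recoverable position information instead of relying on this centroid alone.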

  7. Optimization of Coded Aperture Radioscintigraphy for Sentinel Lymph Node Mapping

    PubMed Central

    Fujii, Hirofumi; Idoine, John D.; Gioux, Sylvain; Accorsi, Roberto; Slochower, David R.; Lanza, Richard C.; Frangioni, John V.

    2011-01-01

    Purpose Radioscintigraphic imaging during sentinel lymph node (SLN) mapping could potentially improve localization; however, parallel-hole collimators have certain limitations. In this study, we explored the use of coded aperture (CA) collimators. Procedures Equations were derived for the six major dependent variables of CA collimators (i.e., masks) as a function of the ten major independent variables, and an optimized mask was fabricated. After validation, dual-modality CA and near-infrared (NIR) fluorescence SLN mapping was performed in pigs. Results Mask optimization required the judicious balance of competing dependent variables, resulting in sensitivity of 0.35%, XY resolution of 2.0 mm, and Z resolution of 4.2 mm at an 11.5 cm FOV. Findings in pigs suggested that NIR fluorescence imaging and CA radioscintigraphy could be complementary, but present difficult technical challenges. Conclusions This study lays the foundation for using CA collimation for SLN mapping, and also exposes several problems that require further investigation. PMID:21567254

  8. Hexagonal uniformly redundant arrays for coded-aperture imaging

    NASA Technical Reports Server (NTRS)

    Finger, M. H.; Prince, T. A.

    1985-01-01

    Uniformly redundant arrays (URAs) are used in coded-aperture imaging, a technique for forming images without mirrors or lenses. URAs constructed on hexagonal lattices are outlined. Details are presented for the construction of a special class of URAs, the skew-Hadamard URAs, which have the following properties: (1) they are nearly half open and half closed, and (2) they are antisymmetric upon rotation by 180 deg, except for the central cell and its repetitions. Some of the skew-Hadamard URAs constructed on a hexagonal lattice have additional symmetries. These special URAs, which have a hexagonal unit pattern and are antisymmetric upon rotation by 60 deg, are called hexagonal uniformly redundant arrays (HURAs). HURAs are particularly suited to gamma-ray imaging in high-background situations, where the best sensitivity is obtained with a half-open, half-closed mask. The hexagonal symmetry of an HURA is also more appropriate for a round position-sensitive detector or a close-packed array of detectors than a rectangular symmetry.

  9. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  10. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have been conducted on the possibility of obtaining good-quality speech at data rates in the range of 16 kbit/s to 32 kbit/s. The techniques considered are related to adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM); at 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation concerns a new method of speech coding that employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched pre-emphasis to ADPCM.
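First-order-prediction DPCM, the building block the new method starts from, quantizes the prediction residual, and the decoder mirrors the encoder's predictor, so the reconstruction error is bounded by half the quantizer step. A minimal sketch with a fixed (not adaptive) quantizer; the coefficient and step size are illustrative:

```python
def dpcm_encode(samples, a=0.9, step=0.05):
    """Quantize first-order prediction residuals; a = predictor coefficient."""
    recon, codes = 0.0, []
    for s in samples:
        pred = a * recon
        q = round((s - pred) / step)        # quantized residual index
        codes.append(q)
        recon = pred + q * step             # encoder tracks the decoder state
    return codes

def dpcm_decode(codes, a=0.9, step=0.05):
    recon, out = 0.0, []
    for q in codes:
        recon = a * recon + q * step        # same predictor as the encoder
        out.append(recon)
    return out

x = [0.0, 0.3, 0.55, 0.7, 0.6, 0.2, -0.1]
y = dpcm_decode(dpcm_encode(x))
print(max(abs(u - v) for u, v in zip(x, y)))   # bounded by step / 2 = 0.025
```

Adaptive bit allocation, the paper's contribution, would replace the fixed `step` with a per-block quantizer whose precision follows the residual energy, spending bits where the signal needs them.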

  11. Study of the asymptotic dynamic aperture in the NICA collider using symplectic tracking codes

    NASA Astrophysics Data System (ADS)

    Bolshakov, A. E.; Zenkevich, P. R.; Kozlov, O. S.

    2015-12-01

    The dependence of the dynamic aperture in the NICA collider on the number of turns has been calculated with the MAD-X tracking code using two independent algorithms: the symplectic tracking program PTC (Polymorphic Tracking Code) and a thin-lens tracking program. The results of the numerical integration of particle motion predict the asymptotic dynamic aperture and the possible particle losses in the collider.

  12. Synthetic aperture radar automatic target recognition using adaptive boosting

    NASA Astrophysics Data System (ADS)

    Sun, Yijun; Liu, Zhipeng; Todorovic, Sinisa; Li, Jian

    2005-05-01

    We propose a novel automatic target recognition (ATR) system for classification of three types of ground vehicles in the MSTAR public release database. First, each image chip is pre-processed by extracting fine and raw feature sets, where raw features compensate for the target-pose estimation error that corrupts fine image features. Then, the chips are classified using the adaptive boosting (AdaBoost) algorithm with the radial basis function (RBF) net as the base learner. Since the RBF net is a binary classifier, we decompose our multiclass problem into a set of binary ones through the error-correcting output codes (ECOC) method, specifying a dictionary of code words for the set of three possible classes. AdaBoost combines the classification results of the RBF net for each binary problem into a code word, which is then "decoded" as one of the code words (i.e., ground-vehicle classes) in the specified dictionary. Along with classification, within the AdaBoost framework, we also conduct efficient fusion of the fine and raw image-feature vectors. The results of large-scale experiments demonstrate that our ATR scheme outperforms the state-of-the-art systems reported in the literature.
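The ECOC decoding step described here reduces to a nearest-codeword search in Hamming distance. A sketch with a hypothetical three-class dictionary (the code words below are illustrative, not those used in the paper):

```python
# Hypothetical dictionary: one code word per ground-vehicle class,
# one bit per binary classifier in the ensemble.
DICTIONARY = {
    "BMP2":  (0, 0, 1, 1, 0),
    "BTR70": (1, 0, 0, 1, 1),
    "T72":   (1, 1, 1, 0, 0),
}

def ecoc_decode(bits):
    """Return the class whose code word is nearest in Hamming distance."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(DICTIONARY, key=lambda c: hamming(DICTIONARY[c], bits))

# One binary classifier flipped its vote; the nearest code word still wins.
print(ecoc_decode((1, 1, 1, 0, 1)))   # → "T72"
```

Well-separated code words give the multiclass decision an error-correction margin, so a minority of wrong binary classifiers cannot change the final class, which is the point of combining ECOC with AdaBoost.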

  13. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.

  14. Incorporating prior knowledge of urban scene spatial structure in aperture code designs for surveillance systems

    NASA Astrophysics Data System (ADS)

    Valenzuela, John R.; Thelen, Brian J.; Subotic, Nikola

    2010-08-01

    Two major missions of surveillance systems are imaging and ground moving target indication (GMTI). Recent advances in coded-aperture electro-optical systems have enabled persistent surveillance systems with extremely large fields of regard. The areas of interest for these surveillance systems are typically urban, with spatial topologies having a very definite structure. We incorporate aspects of a priori information on this structure into our aperture-code designs to enable optimized dealiasing operations for undersampled focal-plane arrays. Our framework enables us to design aperture codes that minimize mean square error for image reconstruction or maximize the signal-to-clutter ratio for GMTI detection. In this paper we present a technical overview of our code-design methodology and show the results of our designed codes on simulated DIRSIG mega-scene data.

  15. Snapshot 2D tomography via coded aperture x-ray scatter imaging

    PubMed Central

    MacCabe, Kenneth P.; Holmgren, Andrew D.; Tornai, Martin P.; Brady, David J.

    2015-01-01

    This paper describes a fan beam coded aperture x-ray scatter imaging system which acquires a tomographic image from each snapshot. This technique exploits cylindrical symmetry of the scattering cross section to avoid the scanning motion typically required by projection tomography. We use a coded aperture with a harmonic dependence to determine range, and a shift code to determine cross-range. Here we use a forward-scatter configuration to image 2D objects and use serial exposures to acquire tomographic video of motion within a plane. Our reconstruction algorithm also estimates the angular dependence of the scattered radiance, a step toward materials imaging and identification. PMID:23842254

  16. Optimization of a coded aperture coherent scatter spectral imaging system for medical imaging

    NASA Astrophysics Data System (ADS)

    Greenberg, Joel A.; Lakshmanan, Manu N.; Brady, David J.; Kapadia, Anuj J.

    2015-03-01

    Coherent scatter X-ray imaging is a technique that provides spatially-resolved information about the molecular structure of the material under investigation, yielding material-specific contrast that can aid medical diagnosis and inform treatment. In this study, we demonstrate a coherent-scatter imaging approach based on the use of coded apertures (known as coded aperture coherent scatter spectral imaging [1, 2]) that enables fast, dose-efficient, high-resolution scatter imaging of biologically-relevant materials. Specifically, we discuss how to optimize a coded aperture coherent scatter imaging system for a particular set of objects and materials, describe and characterize our experimental system, and use the system to demonstrate automated material detection in biological tissue.

  17. Order of Magnitude Signal Gain in Magnetic Sector Mass Spectrometry Via Aperture Coding

    NASA Astrophysics Data System (ADS)

    Chen, Evan X.; Russell, Zachary E.; Amsden, Jason J.; Wolter, Scott D.; Danell, Ryan M.; Parker, Charles B.; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.; Brady, David J.

    2015-09-01

    Miniaturizing instruments for spectroscopic applications requires the designer to confront a tradeoff between instrument resolution and instrument throughput [and associated signal-to-background ratio (SBR)]. This work demonstrates a solution to this tradeoff in sector mass spectrometry by the first application of one-dimensional (1D) spatially coded apertures, similar to those previously demonstrated in optics. This was accomplished by replacing the input slit of a simple 90° magnetic sector mass spectrometer with a specifically designed coded aperture, deriving the corresponding forward mathematical model and spectral reconstruction algorithm, and then utilizing the resulting system to measure and reconstruct the mass spectra of argon, acetone, and ethanol. We expect the application of coded apertures to sector instrument designs to lead to miniature mass spectrometers that maintain the high performance of larger instruments, enabling field detection of trace chemicals and point-of-use mass spectrometry.

  18. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
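The per-block multiplier mechanism described above can be sketched as follows (the flat base matrix and sample coefficients are invented for illustration; a real encoder would use the standard's quantization tables and actual DCT coefficients):

```python
import numpy as np

# Toy sketch of per-block variable quantization: one base quantization
# matrix per channel, scaled by a per-block multiplier, so visually busy
# blocks can be quantized more coarsely. BASE_Q is a stand-in, not a
# table from the standard.
BASE_Q = np.full((8, 8), 16.0)

def quantize_block(dct_coeffs, multiplier):
    """Quantize an 8x8 block with the base matrix scaled by its multiplier."""
    q = BASE_Q * multiplier
    return np.round(dct_coeffs / q), q

def dequantize_block(indices, q):
    return indices * q

# Synthetic DCT-like coefficients: large at low frequencies, small at high.
coeffs = np.outer(np.linspace(200.0, 0.0, 8), np.linspace(1.0, 0.1, 8))
errs = []
for m in (0.5, 1.0, 2.0):   # larger multiplier -> coarser quantization
    idx, q = quantize_block(coeffs, m)
    errs.append(np.abs(coeffs - dequantize_block(idx, q)).max())
```

The maximum reconstruction error per coefficient is bounded by half the scaled quantization step, which is why the multiplier directly controls the block's perceptual error budget.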

  19. SU-E-J-20: Adaptive Aperture Morphing for Online Correction for Prostate Cancer Radiotherapy

    SciTech Connect

    Sandhu, R; Qin, A; Yan, D

    2014-06-01

    Purpose: Online adaptive aperture morphing is desirable over translational couch shifts to accommodate not only target position variation but also anatomic changes (rotation, deformation, and the relation of the target to organs-at-risk). We propose a quick and reliable method for adapting segment aperture leaves for IMRT treatment of the prostate. Methods: The proposed method consists of the following steps: (1) delineate the contours of the prostate, SV, bladder, and rectum on kV-CBCT; (2) determine prostate displacement from the rigid-body registration of the contoured prostate manifested on the reference CT and the CBCT; (3) adapt the MLC segment apertures obtained from the pre-treatment IMRT plan to accommodate the shifts as well as anatomic changes. The MLC aperture adaptive algorithm involves two steps: first, move the whole aperture according to the prostate translational/rotational shifts; second, fine-tune the aperture shape to carry the spatial relationship between the planning target contour and the MLC aperture over to the daily target contour. Feasibility of this method was evaluated retrospectively on a seven-field IMRT treatment of a prostate cancer patient by comparing dose-volume histograms of the original plan and the aperture-adjusted plan, with/without additional segment weight optimization (SWO), on two daily treatment CBCTs selected for relatively large motion and rotation. Results: For the first daily treatment, the prostate rotation was significant (12 degrees around the lateral axis). With the aperture-adjusted plan, the D95 to the target was improved by 25% and the rectum dose (D30, D40) was reduced by 20% relative to the original plan on daily volumes. For the second treatment fraction (lateral shift = 6.7 mm), the target D95 improved by 3% after adjustment and the bladder dose (D30, maximum dose) was reduced by 1%. For both cases, extra SWO did not provide significant improvement. Conclusion: The proposed method of adapting segment apertures is promising for online treatment position correction.
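A miniature of the two-step morphing idea, under a deliberately simplified model in which each leaf pair and each target contour row is a 1-D interval (all data structures and numbers here are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch: each MLC leaf pair is a (left, right) interval in
# one row; each target contour is a per-row interval. Step 1 translates
# the aperture rigidly by the target shift; step 2 restores the planning
# leaf-to-target margins against the daily contour (for a pure
# translation, step 2 changes nothing).

def morph_aperture(leaves, shift, plan_target, daily_target):
    morphed = []
    for (l, r), (pl, pr), (dl, dr) in zip(leaves, plan_target, daily_target):
        l, r = l + shift, r + shift                 # step 1: rigid shift
        left_margin = (pl + shift) - l              # margins after step 1
        right_margin = r - (pr + shift)
        morphed.append((dl - left_margin, dr + right_margin))  # step 2
    return morphed

# Daily target = plan target shifted by 0.5 cm: leaves simply translate.
leaves = [(-2.0, 2.0), (-3.0, 3.0)]
plan = [(-1.0, 1.0), (-2.0, 2.0)]
daily = [(-0.5, 1.5), (-1.5, 2.5)]
adapted = morph_aperture(leaves, 0.5, plan, daily)
```

When the daily contour also deforms, step 2 moves each leaf pair independently, which is the behavior that plain couch shifts cannot reproduce.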

  20. Single-shot stand-off detection of explosives precursors using UV coded aperture Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Svanqvist, M.; Nordberg, M.; Östmark, H.

    2015-05-01

    We present preliminary results on the performance of a basic stand-off Raman spectroscopy setup using coded apertures compared to a setup using a round-to-slit fiber for light collection. Measurements were performed using single 5 ns laser shots at 355 nm with a target distance of 5.4 meters on ammonium nitrate powder. The results show an increase in signal-to-noise ratio of 3-8 times when using coded aperture multiplexing compared to the fiber setup.

  1. A systematic investigation of large-scale diffractive coded aperture designs

    NASA Astrophysics Data System (ADS)

    Gottesman, Stephen R.; Shrekenhamer, Abraham; Isser, Abraham; Gigioli, George

    2012-10-01

    One obstacle to optimizing performance of large-scale coded aperture systems operating in the diffractive regime has been the lack of a robust, rapid, and efficient method for generating diffraction patterns that are projected by the system onto the focal plane. We report on the use of the 'Shrekenhamer Transform' for a systematic investigation of various types of coded aperture designs operating in the diffractive mode. Each design is evaluated in terms of its autocorrelation function for potential use in future imaging applications. The motivation of our study is to gain insight into more efficient optimization methods of image reconstruction algorithms.

  2. A new pad-based neutron detector for stereo coded aperture thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.

    2014-09-01

    A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive ³He-filled ionization chamber, in which the anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5 × 5 mm² anode pads (48 × 48 in total, corresponding to a 24 × 24 cm² sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid-based neutron detectors. Among these are its rugged design, lighter weight, and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates and also suitable for performing data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge-sharing algorithm. In this paper we report on the development and performance of the new pad-based neutron camera, describe a charge-sharing algorithm to achieve sub-pixel spatial resolution, and present the first stereoscopic coded aperture images of thermalized neutron sources obtained with the new system.
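The charge-sharing idea can be sketched as a charge-weighted centroid over neighbouring pads (the pad pitch matches the abstract, but the charge values below are illustrative only, not the authors' algorithm):

```python
import numpy as np

# Toy illustration of sub-pixel positioning: an event spreads charge over
# neighbouring 5 mm pads, and the charge-weighted centroid locates it more
# finely than the pad pitch.
PITCH_MM = 5.0

def event_centroid(charges):
    """Charge-weighted centroid (in mm, pad-0 centre at 0) of a 2D pad array."""
    charges = np.asarray(charges, dtype=float)
    total = charges.sum()
    ys, xs = np.mgrid[0:charges.shape[0], 0:charges.shape[1]]
    x_mm = (xs * charges).sum() / total * PITCH_MM
    y_mm = (ys * charges).sum() / total * PITCH_MM
    return x_mm, y_mm

# Charge split 3:1 between two horizontally adjacent pads places the event
# a quarter of the way from the first pad centre to the second.
x, y = event_centroid([[3.0, 1.0]])
```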

  3. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511-keV positron-annihilation photons. For a galactic-center 511-keV source strength of 0.001 photons/sq cm/s, the source-location accuracy is expected to be +/- 0.2 deg.

  4. Experimental implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2015-03-01

    A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.

  5. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel-hole or pinhole collimators. Coded aperture imaging is a well-known method for gamma-ray source directional identification, applied mainly in astrophysics. The increase in efficiency due to the substitution of the collimators by coded masks makes the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniformly Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations, and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study of the spatial localization of two point sources using coded aperture masks of rank 7 and 19.
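The triangulation step can be sketched as the midpoint of closest approach between the two source rays (the positions and directions below are invented inputs; the coded-aperture decoding that would produce them is not shown):

```python
import numpy as np

# Each camera yields a ray from its position toward the source; the source
# is estimated as the midpoint of the rays' closest approach (the standard
# two-line least-squares construction).

def triangulate(p1, d1, p2, d2):
    p1, d1, p2, d2 = map(np.asarray, (p1, d1, p2, d2))
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b           # zero only for parallel rays
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Cameras 1 unit apart on the x-axis, both rays pointing at (0.5, 0, 2).
est = triangulate([0.0, 0, 0], [0.5, 0, 2.0], [1.0, 0, 0], [-0.5, 0, 2.0])
```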

  6. Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems

    SciTech Connect

    Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter

    2010-01-01

    From a safeguards perspective, gamma-ray imaging has the potential to reduce the manpower and cost of effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.

  7. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-01

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale. PMID:24470413

  8. Source-Search Sensitivity of a Large-Area, Coded-Aperture, Gamma-Ray Imager

    SciTech Connect

    Ziock, K P; Collins, J W; Craig, W W; Fabris, L; Lanza, R C; Gallagher, S; Horn, B P; Madden, N W; Smith, E; Woodring, M L

    2004-10-27

    We have recently completed a large-area, coded-aperture, gamma-ray imager for use in searching for radiation sources. The instrument was constructed to verify that weak point sources can be detected at considerable distances if one uses imaging to overcome fluctuations in the natural background. The instrument uses a rank-19, one-dimensional coded aperture to cast shadow patterns onto a 0.57 m² NaI(Tl) detector composed of 57 individual cubes, each 10 cm on a side. These are arranged in a 19 x 3 array. The mask is composed of 4-cm-thick, 1-m-high, 10-cm-wide lead blocks. The instrument is mounted in the back of a small truck from which images are obtained as one drives through a region. Results of first measurements obtained with the system are presented.

  9. Studies of coded aperture gamma-ray optics using an Anger camera

    NASA Astrophysics Data System (ADS)

    Charalambous, P. M.; Dean, A. J.; Stephen, J. B.; Young, N. G. S.; Gourlay, A. R.

    1983-08-01

    An experimental arrangement using an Anger camera as a position-sensitive focal plane, in conjunction with a series of coded aperture masks, has been employed to generate laboratory gamma-ray images. These tests were designed to investigate quantitatively a number of potential aberrations present in any practicable imaging system. It is shown that, by proper design, the major sources of image defects may be reduced to a level compatible with the production of good quality gamma-ray sky images.

  10. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes that are an integer factor smaller in each direction than the holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, non-holes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput.
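A minimal 1-D sketch of balanced-correlation decoding in the same spirit (it uses a length-7 pseudo-noise aperture rather than a 2-D self-supporting URA, and omits the patent's 0-weighted supporting area):

```python
import numpy as np

# 1-D balanced-correlation decoding: holes decode as +1, non-holes as -1.
# The aperture here is an m-sequence, whose balanced cross-correlation with
# the aperture is a delta function, so decoding recovers the scene exactly.

def lfsr_sequence(n=7):
    """Length-7 m-sequence from the primitive polynomial x^3 + x + 1."""
    s = [1, 0, 0]
    for _ in range(n - 3):
        s.append(s[-2] ^ s[-3])
    return np.array(s[:n])

aperture = lfsr_sequence()          # holes = 1, opaque = 0
decoder = 2 * aperture - 1          # holes -> +1, non-holes -> -1

def circular_convolve(x, h):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

def circular_correlate(x, h):
    return np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h))))

scene = np.array([0.0, 0, 5, 0, 0, 2, 0])            # two point sources
detector = circular_convolve(scene, aperture)        # shadowgram
decoded = circular_correlate(detector, decoder) / aperture.sum()
```

The self-supporting variant simply adds a third weight, 0, over the bridging material, so the support contributes nothing to the correlation.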

  11. Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-03-01

    It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas the coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore, in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprising a repeating slit pattern with 2-mm periodicity, and a linear array of 128 detector pixels with 6.5-keV energy resolution. The breast tissue that was scanned comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground-truth image showed that the coded aperture imaging technique has a cancerous-pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer), and accuracy of 92.4%, 91.9%, and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
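The per-pixel classification rule described above amounts to a nearest-reference match by correlation coefficient; a toy version with invented stand-in spectra (not measured cross sections):

```python
import numpy as np

# Each reconstructed differential cross section is assigned the label of
# the reference spectrum with which its Pearson correlation coefficient is
# highest. The reference vectors here are illustrative stand-ins.
REFERENCES = {
    "adipose":        np.array([1.0, 3.0, 2.0, 0.5, 0.2]),
    "fibroglandular": np.array([0.5, 1.0, 3.0, 2.0, 0.5]),
    "cancerous":      np.array([0.3, 0.8, 2.0, 3.0, 1.5]),
}

def classify(spectrum):
    """Label whose reference spectrum has the highest correlation coefficient."""
    def corr(a, b):
        return np.corrcoef(a, b)[0, 1]
    return max(REFERENCES, key=lambda k: corr(spectrum, REFERENCES[k]))
```

Because the correlation coefficient is invariant to scale and offset, the rule tolerates overall intensity differences between the reconstruction and the references.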

  12. Optimizing the search for high-z GRBs:. the JANUS X-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Burrows, D. N.; Fox, D.; Palmer, D.; Romano, P.; Mangano, V.; La Parola, V.; Falcone, A. D.; Roming, P. W. A.

    We discuss the optimization of gamma-ray burst (GRB) detectors with a goal of maximizing the detected number of bright high-redshift GRBs, in the context of design studies conducted for the X-ray transient detector on the JANUS mission. We conclude that the optimal energy band for detection of high-z GRBs is below about 30 keV. We considered both lobster-eye and coded aperture designs operating in this energy band. Within the available mass and power constraints, we found that the coded aperture mask was preferred for the detection of high-z bursts with bright enough afterglows to probe galaxies in the era of the Cosmic Dawn. This initial conclusion was confirmed through detailed mission simulations that found that the selected design (an X-ray Coded Aperture Telescope) would detect four times as many bright, high-z GRBs as the lobster-eye design we considered. The JANUS XCAT instrument will detect 48 GRBs with z > 5 and fluence S_x > 3 × 10⁻⁷ erg cm⁻² in a two-year mission.

  13. Lensless coded-aperture imaging with separable Doubly-Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2015-02-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials that can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures: opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, the recorded data can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images, orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial light modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks: images of extended outdoor scenes collected in natural light are rendered clearly.
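Why separability makes decoding tractable can be sketched as follows (the banded Toeplitz factors below are generic stand-ins, not the paper's mask designs): a measurement of the form Y = A X Bᵀ is inverted with two n × n solves instead of one n² × n² inversion.

```python
import numpy as np

# Separable measurement model: Y = A X B^T, with A acting on rows and B on
# columns. Decoding then costs two n x n solves, O(n^3), versus O(n^6) for
# a general n^2 x n^2 operator on vec(X). The factors here are banded
# lower-triangular Toeplitz matrices, invertible by construction.
n = 32
A = np.eye(n) + 0.6 * np.eye(n, k=-1) + 0.3 * np.eye(n, k=-2)  # row factor
B = np.eye(n) + 0.5 * np.eye(n, k=-1) + 0.2 * np.eye(n, k=-2)  # column factor

rng = np.random.default_rng(1)
X = rng.random((n, n))      # unknown scene
Y = A @ X @ B.T             # separable coded-aperture measurement

# Solve B Z = Y^T to get Z = X^T A^T, then solve A X = Z^T.
X_hat = np.linalg.solve(A, np.linalg.solve(B, Y.T).T)
```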

  14. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
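A minimal sketch of block adaptive quantization (block size, bit depth, and the ±4-sigma quantizer range are illustrative choices, not ENVISAT's actual parameters):

```python
import numpy as np

# BAQ in miniature: each block of raw samples is scaled by its own
# estimated standard deviation, then quantized with a fixed uniform
# quantizer, so the quantizer thresholds adapt to the local signal level.
# Only the small codes plus one gain per block need to be down-linked.
BLOCK, BITS = 128, 4
LEVELS = 2 ** BITS

def baq_encode(samples):
    blocks = samples.reshape(-1, BLOCK)
    gains = blocks.std(axis=1)                  # one gain per block
    norm = blocks / gains[:, None]
    # uniform quantizer over [-4, 4) sigma with LEVELS cells
    codes = np.clip(np.floor((norm + 4.0) / 8.0 * LEVELS), 0, LEVELS - 1)
    return codes.astype(np.uint8), gains

def baq_decode(codes, gains):
    centers = (codes + 0.5) / LEVELS * 8.0 - 4.0    # cell midpoints
    return (centers * gains[:, None]).ravel()

rng = np.random.default_rng(2)
# SAR-like raw data: Gaussian blocks with widely varying power
raw = np.concatenate([s * rng.standard_normal(BLOCK) for s in (0.1, 1.0, 10.0)])
recon = baq_decode(*baq_encode(raw))
```

The adaptivity is what preserves quality across the wide dynamic range: a single fixed quantizer tuned to the loud block would crush the quiet ones.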

  15. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    SciTech Connect

    Gentry, S.M.

    1994-09-01

    Minimum detectable irradiance levels for a diffraction-grating-based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth's surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction-grating technique, a classical Young's double-slit aperture was investigated as a possible optimized solution, but was not shown to produce a system with a better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double-slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While a concept was not found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for the analysis of a wide range of optoelectronic systems in which the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.

  16. SAGE - MULTIDIMENSIONAL SELF-ADAPTIVE GRID CODE

    NASA Technical Reports Server (NTRS)

    Davies, C. B.

    1994-01-01

    SAGE, Self Adaptive Grid codE, is a flexible tool for adapting and restructuring both 2D and 3D grids. Solution-adaptive grid methods are useful tools for efficient and accurate flow predictions. In supersonic and hypersonic flows, strong-gradient regions such as shocks, contact discontinuities, shear layers, etc., require careful distribution of grid points to minimize grid error and produce accurate flow-field predictions. SAGE helps the user obtain more accurate solutions by intelligently redistributing (i.e., adapting) the original grid points based on an initial or interim flow-field solution. The user then computes a new solution using the adapted grid as input to the flow solver. The adaptive-grid methodology poses the problem in an algebraic, unidirectional manner for multi-dimensional adaptations. The procedure is analogous to applying tension and torsion spring forces proportional to the local flow gradient at every grid point and finding the equilibrium position of the resulting system of grid points. The multi-dimensional problem of grid adaption is split into a series of one-dimensional problems along the computational coordinate lines. The reduced one-dimensional problem then requires a tridiagonal solver to find the location of grid points along a coordinate line. Multi-directional adaption is achieved by the sequential application of the method in each coordinate direction. The tension forces direct the redistribution of points to the strong-gradient region. To maintain smoothness and a measure of orthogonality of grid lines, torsional forces are introduced that relate information between the families of lines adjacent to one another. The smoothness and orthogonality constraints are direction-dependent, since they relate only the coordinate lines that are being adapted to the neighboring lines that have already been adapted. The solutions are therefore non-unique and depend on the order and direction of adaption; this non-uniqueness of the adapted grid is a direct consequence of the sequential, directional formulation.
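In one dimension, the tension-spring analogy reduces to exactly the tridiagonal system mentioned above; a sketch with an invented weight function (the stiffness law and sizes are illustrative, not SAGE's actual formulation):

```python
import numpy as np

# Spring analogy in 1-D: each grid interval is a tension spring whose
# stiffness grows with the local solution gradient. At equilibrium,
# w[i-1]*(x[i]-x[i-1]) = w[i]*(x[i+1]-x[i]) for interior points, a
# tridiagonal system; stiff (high-gradient) springs end up short, so
# points cluster where the solution varies fastest.

def adapt_line(x, f):
    """Redistribute interior points of x (endpoints fixed) toward large |df/dx|."""
    grad = np.abs(np.gradient(f, x))
    w = 1.0 + 5.0 * (grad[:-1] + grad[1:])   # stiffness per interval
    n = len(x)
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = x[0], x[-1]                # pinned endpoints
    for i in range(1, n - 1):                # spring-equilibrium rows
        A[i, i - 1], A[i, i], A[i, i + 1] = w[i - 1], -(w[i - 1] + w[i]), w[i]
    return np.linalg.solve(A, b)

x = np.linspace(0.0, 1.0, 41)
f = np.tanh(20.0 * (x - 0.5))    # shock-like gradient near x = 0.5
x_new = adapt_line(x, f)
```

A production solver would use a Thomas-algorithm tridiagonal solve rather than a dense one, and would iterate adaption with the flow solver as the abstract describes.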

  17. The use of an active coded aperture for improved directional measurements in high energy gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Johansson, A.; Beron, B. L.; Campbell, L.; Eichler, R.; Hofstadter, R.; Hughes, E. B.; Wilson, S.; Gorodetsky, P.

    1980-01-01

    The coded aperture, a refinement of the scatter-hole camera, offers a method for the improved measurement of gamma-ray direction in gamma-ray astronomy. Two prototype coded apertures have been built and tested. The more recent of these has 128 active elements of the heavy scintillator BGO. Results of tests for gamma-rays in the range 50-500 MeV are reported and future application in space discussed.

  18. Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography.

    PubMed

    Accorsi, Roberto

    2005-10-01

    Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging. PMID:16231793

  19. Analytic derivation of the longitudinal component of the three-dimensional point-spread function in coded-aperture laminography

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto

    2005-10-01

    Near-field coded-aperture data from a single view contain information useful for three-dimensional (3D) reconstruction. A common approach is to reconstruct the 3D image one plane at a time. An analytic expression is derived for the 3D point-spread function of coded-aperture laminography. Comparison with computer simulations and experiments for apertures with different size, pattern, and pattern family shows good agreement in all cases considered. The expression is discussed in the context of the completeness conditions for projection data and is applied to explain an example of nonlinear behavior inherent in 3D laminographic imaging.

  20. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    SciTech Connect

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast-neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast-neutron interactions with the object being examined. The positions of the nuclear elements are determined by the locations of the gamma emitters. Existing fast-neutron techniques are inefficient in one respect or the other: in Pulsed Fast Neutron Analysis (PFNA), neutrons are used with very low efficiency; in Fast Neutron Analysis (FNA), the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) system the authors have developed, the efficiency of both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n^3 volume elements (voxels) in a cube of n resolution elements on a side, the sensitivity can be compared with that of other neutron probing techniques. As compared to PFNA, the improvement in neutron utilization is n^2, where the total number of voxels in the object being examined is n^3. Compared to FNA, the improvement in gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n^2/2, where n^2 is the number of total detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system also depends on the nature and distribution of background events, and this comparison may somewhat reduce the effective sensitivity of CAFNA. The authors have performed analyses, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized using CAFNA.

  1. Large Coded Aperture Mask for Spaceflight Hard X-ray Images

    NASA Technical Reports Server (NTRS)

    Vigneau, Danielle N.; Robinson, David W.

    2002-01-01

    The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel, which projects a shadow on the detector array during a gamma-ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma-ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied to each tile and allowed to cure. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and the panel was lowered onto the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.

  2. A coded-aperture technique allowing x-ray phase contrast imaging with conventional sources

    SciTech Connect

    Olivo, Alessandro; Speller, Robert

    2007-08-13

    Phase contrast imaging (PCI) overcomes the basic limitation of x-ray imaging, i.e., the poor image contrast that results from small absorption differences. Up to now, it has been mostly limited to synchrotron radiation facilities, owing to stringent requirements on the x-ray source and detectors; only one technique has been shown to provide PCI images with conventional sources, and it has practical limitations. The authors propose a different approach, based on coded apertures, which provides strong PCI signals with conventional sources and detectors and imposes practically no limits on applicability. They expect this method to lay the foundation for the widespread adoption of PCI.

  3. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.

  4. Snapshot full-volume coded aperture x-ray diffraction tomography

    NASA Astrophysics Data System (ADS)

    Greenberg, Joel A.; Brady, David J.

    2016-05-01

    X-ray diffraction tomography (XRDT) is a well-established technique that makes it possible to identify the material composition of an object throughout its volume. We show that using coded apertures to structure the measured scatter signal gives rise to a family of imaging architectures that enables snapshot XRDT in up to four dimensions. We consider pencil, fan, and cone beam snapshot XRDT and show results from both experimental and simulation-based studies. We find that, while lower-dimensional systems typically result in higher imaging fidelity, higher-dimensional systems can perform adequately for a specific task at orders-of-magnitude faster scan times.

  5. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (e.g., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive, and Dynamic Event Tree (DET) sampling. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  6. Coded aperture correlation holography-a new type of incoherent digital holograms.

    PubMed

    Vijayakumar, A; Kashter, Yuval; Kelner, Roy; Rosen, Joseph

    2016-05-30

    We propose and demonstrate a new concept of incoherent digital holography termed coded aperture correlation holography (COACH). In COACH, the hologram of an object is formed by the interference of light diffracted from the object with light diffracted from the same object that has passed through a coded phase mask (CPM). Another hologram, called the point spread function (PSF) hologram, is recorded for a point object under identical conditions and with the same CPM. The reconstructed image is obtained by correlating the object hologram with the PSF hologram. The image reconstruction of a multiplane object using COACH was compared with that of other equivalent imaging systems and was found to possess higher axial resolution than Fresnel incoherent correlation holography. PMID:27410157
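    The correlation-based reconstruction described above can be illustrated with a toy model. Under a shift-invariance assumption, the object hologram behaves like the object convolved with the PSF hologram, so cross-correlating the two recovers the object; the random complex field below merely stands in for a real CPM response and is not the authors' optical model:

```python
import numpy as np

# Toy sketch of the COACH reconstruction step: model the object hologram
# as the object circularly convolved with a "PSF hologram" (a random
# complex field used purely for illustration), then recover the object
# by cross-correlating the object hologram with the PSF hologram.

rng = np.random.default_rng(0)
N = 64
psf = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

obj = np.zeros((N, N))
obj[20, 30] = 1.0          # two point emitters of different strength
obj[40, 12] = 0.5

# Object hologram = object (*) PSF hologram (circular convolution via FFT)
H_obj = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf))

# Reconstruction = cross-correlation of object hologram with PSF hologram
rec = np.abs(np.fft.ifft2(np.fft.fft2(H_obj) * np.conj(np.fft.fft2(psf))))

# The brightest reconstructed pixel falls at the stronger point source.
print(np.unravel_index(rec.argmax(), rec.shape))
```

The sharp autocorrelation of the random field is what turns the correlation into an approximate delta function, mirroring the role of the CPM in COACH.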

  7. Coded aperture x-ray diffraction imaging with transmission computed tomography side-information

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.

    2016-03-01

    Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (a reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction that incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images than their uncorrected counterparts. The algorithm is based on a spectrally grouped edge-preserving regularizer, in which the neighborhood edge weights are determined by spatial distances and attenuation values.

  8. Coded apertures allow high-energy x-ray phase contrast imaging with laboratory sources

    NASA Astrophysics Data System (ADS)

    Ignatyev, K.; Munro, P. R. T.; Chana, D.; Speller, R. D.; Olivo, A.

    2011-07-01

    This work analyzes the performance of the coded-aperture-based x-ray phase contrast imaging approach, showing that it can be used at high x-ray energies with acceptable exposure times. Due to limitations of the source used, we show images acquired at tube voltages of up to 100 kVp; however, there is no intrinsic reason the method could not be extended to even higher energies. In particular, we show quantitative agreement between the contrast extracted from the experimental x-ray images and the theoretical contrast determined by the behavior of the material's refractive index as a function of energy. This proves that all energies in the spectrum used contribute to image formation, and that there are no additional factors degrading image contrast as the x-ray energy is increased. We also discuss the method's flexibility by displaying and analyzing the first set of images obtained while varying the relative displacement between coded-aperture sets, which leads to image variations somewhat similar to those observed when changing the crystal angle in analyzer-based imaging. Finally, we discuss the method's possible advantages in terms of simplified set-up, scalability, reduced exposure times, and complete achromaticity. We believe these would be helpful in applications requiring the imaging of highly absorbing samples, e.g., materials science and security inspection, and, by way of example, we demonstrate a possible application in the latter.

  9. High-resolution, high sensitivity detectors for molecular imaging with radionuclides: The coded aperture option

    NASA Astrophysics Data System (ADS)

    Cusanno, F.; Cisbani, E.; Colilli, S.; Fratoni, R.; Garibaldi, F.; Giuliani, F.; Gricia, M.; Lo Meo, S.; Lucentini, M.; Magliozzi, M. L.; Santavenere, F.; Lanza, R. C.; Majewski, S.; Cinti, M. N.; Pani, R.; Pellegrini, R.; Orsini Cancelli, V.; De Notaristefani, F.; Bollini, D.; Navarria, F.; Moschini, G.

    2006-12-01

    Molecular imaging with radionuclides is a very sensitive technique because it can produce images at nanomolar or picomolar concentrations, which has generated rapidly growing interest in radionuclide imaging of small animals. Radiolabeling of small molecules, antibodies, peptides, and probes for gene expression enables molecular imaging in vivo, but only if a suitable imaging system is used. Detecting small tumors in humans is another important application of such techniques. In single-gamma imaging there is a well-known trade-off between spatial resolution and sensitivity due to unavoidable collimation requirements; this sensitivity limitation affects the performance of imaging systems, especially when only radiopharmaceuticals with limited uptake are available. In many cases coded aperture collimation can provide a solution, if the near-field artifact effect can be eliminated or limited. This is the case at least for "small volume" imaging, such as that of small animals. In this paper, 3D laminography simulations and preliminary measurements with coded aperture collimation are presented. Different masks have been designed for different applications, showing the advantages of the technique in terms of sensitivity and spatial resolution. The limitations of the technique are also discussed.

  10. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties, including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved in fabricating continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites are a viable alternative: they are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as among the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as part of the cooperative activity.

  11. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769

  12. Source size and temporal coherence requirements of coded aperture type x-ray phase contrast imaging systems.

    PubMed

    Munro, Peter R T; Ignatyev, Konstantin; Speller, Robert D; Olivo, Alessandro

    2010-09-13

    There is currently much interest in developing X-ray Phase Contrast Imaging (XPCI) systems that employ laboratory sources, in order to deploy the technique in real-world applications. The challenge faced by nearly all XPCI techniques is that of efficiently utilizing the flux emitted by an x-ray tube, which is polychromatic and possesses only partial spatial coherence. Techniques have, however, been developed which overcome these limitations. One such technique, known as coded aperture XPCI, has been under development in our laboratories in recent years, principally for application in medical imaging and security screening. In this paper we derive the limitations imposed upon source polychromaticity and spatial extent by the coded aperture system. We also show that although other grating-based XPCI techniques employ a different physical principle, they satisfy design constraints similar to those of coded aperture XPCI. PMID:20940863

  13. Sensitivity of coded aperture Raman spectroscopy to analytes beneath turbid biological tissue and tissue-simulating phantoms

    PubMed Central

    Maher, Jason R.; Matthews, Thomas E.; Reid, Ashley K.; Katz, David F.; Wax, Adam

    2014-01-01

    Traditional slit-based spectrometers have an inherent trade-off between spectral resolution and throughput that can limit their performance when measuring diffuse sources such as light returned from highly scattering biological tissue. Recently, multielement fiber bundles have been used to effectively measure diffuse sources, e.g., in the field of spatially offset Raman spectroscopy, by remapping the source (or some region of the source) into a slit shape for delivery to the spectrometer. Another approach is to change the nature of the instrument by using a coded entrance aperture, which can increase throughput without sacrificing spectral resolution. In this study, two spectrometers, one with a slit-based entrance aperture and the other with a coded aperture, were used to measure Raman spectra of an analyte as a function of the optical properties of an overlying scattering medium. Power-law fits reveal that the analyte signal is approximately proportional to the number of transport mean free paths of the scattering medium raised to a power of −0.47 (coded aperture instrument) or −1.09 (slit-based instrument). These results demonstrate that the attenuation in signal intensity is more pronounced for the slit-based instrument and highlight the scattering regimes where coded aperture instruments can provide an advantage over traditional slit-based spectrometers. PMID:25371979
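    The reported power-law exponents translate directly into signal ratios. A minimal sketch of that arithmetic, using only the fitted exponents quoted above:

```python
# Relative analyte signal versus scattering depth, from the abstract's
# power-law fits: signal ~ L**(-0.47) for the coded-aperture instrument
# and ~ L**(-1.09) for the slit instrument, where L is the number of
# transport mean free paths of the overlying scattering medium.

def relative_signal(L, exponent):
    """Signal at L transport mean free paths, normalized to L = 1."""
    return L ** exponent

for L in (2, 5, 10):
    coded = relative_signal(L, -0.47)
    slit = relative_signal(L, -1.09)
    print(f"L={L:2d}: coded {coded:.2f}, slit {slit:.2f}, "
          f"coded advantage x{coded / slit:.1f}")
```

At ten transport mean free paths the coded-aperture instrument retains roughly four times the relative signal of the slit instrument, consistent with the trend the study reports.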

  14. Design of Pel Adaptive DPCM coding based upon image partition

    NASA Astrophysics Data System (ADS)

    Saitoh, T.; Harashima, H.; Miyakawa, H.

    1982-01-01

    A Pel Adaptive DPCM coding system based on image partition is developed which possesses coding characteristics superior to those of the Block Adaptive DPCM coding system. The method uses multiple DPCM coding loops and nonhierarchical cluster analysis. It is found that the coding performance of the Pel Adaptive DPCM method differs depending on the subject image: the design yields a maximum performance advantage of 2.9 dB for the Girl and Couple images and 1.5 dB for the Aerial image, although no advantage was obtained for the Moon image. These results represent an improvement over the optimally designed Block Adaptive DPCM coding method proposed by Saito et al. (1981).
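    For readers unfamiliar with DPCM, the core prediction loop can be sketched in a few lines. This is a generic first-order DPCM coder, not the paper's Pel Adaptive scheme, which switches among multiple such loops on a per-pixel basis:

```python
# Minimal first-order DPCM: each sample is predicted by the previous
# *reconstructed* sample, and only the quantized residual is coded.
# Keeping the encoder's predictor in sync with the decoder prevents
# quantization error from accumulating.

def dpcm_encode(samples, step):
    """Quantize prediction residuals against the previous reconstruction."""
    codes, prev = [], 0
    for s in samples:
        q = round((s - prev) / step)   # quantized residual
        codes.append(q)
        prev += q * step               # decoder-matched reconstruction
    return codes

def dpcm_decode(codes, step):
    out, prev = [], 0
    for q in codes:
        prev += q * step
        out.append(prev)
    return out

line = [10, 12, 15, 15, 14, 20]        # one scan line of pixel values
codes = dpcm_encode(line, step=2)
print(codes)                           # small residuals around zero
print(dpcm_decode(codes, 2))           # reconstruction within +/- step/2
```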

  15. Computer vision for detecting and quantifying gamma-ray sources in coded-aperture images

    SciTech Connect

    Schaich, P.C.; Clark, G.A.; Sengupta, S.K.; Ziock, K.P.

    1994-11-02

    The authors report the development of an automatic image analysis system that detects gamma-ray source regions in images obtained from a coded-aperture gamma-ray imager. The number of gamma sources in the image is not known prior to analysis. The system counts the number (K) of gamma sources detected in the image and estimates a lower bound for the probability that the number of sources in the image is K. The system consists of a two-stage pattern classification scheme in which a Probabilistic Neural Network is used in supervised learning mode. The algorithms were developed and tested using real gamma-ray images from controlled experiments in which the number and location of depleted uranium source disks in the scene were known.

  16. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so masks with small numbers of elements are called for. In this case it is feasible to analyze by computer all possible mask patterns of a given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies, and transparencies of different mask patterns is developed. The results of such a computer analysis for masks of up to 5 × 5 unit cells are presented, and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.

  17. Gamma ray imaging using coded aperture masks: a computer simulation approach.

    PubMed

    Jimenez, J; Olmos, P; Pablos, J L; Perez, J M

    1991-02-10

    Gamma-ray imaging using coded aperture masks as focusing elements is a widespread technique for static position-sensitive detectors. Several transfer functions have been proposed to represent the set of holes in the mask mathematically, the uniformly redundant array collimator being the most popular design. A considerable amount of work has been done to improve the digital methods used to deconvolve the gamma-ray image formed at the detector plane with this transfer function. Here we present a study of the behavior of these techniques when applied to the geometric shadows produced by a set of point emitters. The shape of the object reconstructed from these shadows is compared with that resulting from analytical reconstruction, defining the validity ranges of the usual algorithmic approximations reported in the literature. Finally, several improvements are discussed. PMID:20582025
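    The correlation-based deconvolution these studies rely on can be demonstrated in one dimension. The sketch below builds a uniformly-redundant-style mask from quadratic residues modulo a prime p ≡ 3 (mod 4), a standard construction simpler than the 2D URA collimators discussed in the paper, and shows that correlating the detector shadowgram with the ±1 decoding array returns the source distribution exactly:

```python
# 1D toy of coded-aperture imaging: the mask's open holes follow the
# quadratic residues mod p = 11 (p = 3 mod 4), whose +/-1 sequence has a
# perfectly flat off-peak cyclic autocorrelation.  Correlating the
# shadowgram with that sequence therefore recovers the sources up to a
# known scale factor (p + 1) / 2 = 6.

p = 11
qr = {(k * k) % p for k in range(1, p)}          # quadratic residues mod p
s = [1 if (k == 0 or k in qr) else -1 for k in range(p)]
mask = [(v + 1) // 2 for v in s]                 # open (1) / closed (0)

def cyc_conv(x, h):
    return [sum(x[k] * h[(n - k) % p] for k in range(p)) for n in range(p)]

def cyc_corr(x, h):
    return [sum(x[(n + k) % p] * h[k] for k in range(p)) for n in range(p)]

obj = [0] * p
obj[2], obj[7] = 3, 1                            # two point sources

detector = cyc_conv(obj, mask)                   # shadowgram on the detector
rec = cyc_corr(detector, s)                      # correlate with +/-1 decoder

print([r // ((p + 1) // 2) for r in rec])        # recovers obj exactly
```

The exact recovery holds only for the idealized cyclic geometry; the partial shadows from point emitters studied in the paper are precisely where these algorithmic approximations start to break down.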

  18. The laser linewidth effect on the image quality of phase coded synthetic aperture ladar

    NASA Astrophysics Data System (ADS)

    Cai, Guangyu; Hou, Peipei; Ma, Xiaoping; Sun, Jianfeng; Zhang, Ning; Li, Guangyuan; Zhang, Guo; Liu, Liren

    2015-12-01

    The phase coded (PC) waveform in synthetic aperture ladar (SAL) outperforms the linear frequency modulated (LFM) signal in having lower side lobes and shorter pulse durations, and it makes rigid control of the chirp starting point in every pulse unnecessary. Building on the radar PC waveform and strip-map SAL, the backscattered signal of a point target in PC SAL is derived, and a two-dimensional matched-filtering algorithm is introduced to focus a point image. As an inherent property of lasers, linewidth is always detrimental to coherent ladar imaging. Using a widely adopted laser linewidth model, the effect of laser linewidth on SAL image quality is analyzed theoretically and examined via Monte Carlo simulation. The research gives a clear view of how to select linewidth parameters in future PC SAL systems.

  19. Evaluation of the cosmic-ray induced background in coded aperture high energy gamma-ray telescopes

    NASA Technical Reports Server (NTRS)

    Owens, Alan; Barbier, Loius M.; Frye, Glenn M.; Jenkins, Thomas L.

    1991-01-01

    While the application of coded-aperture techniques to high-energy gamma-ray astronomy offers potentially arc-second angular resolution, concerns were raised about the level of secondary radiation produced in a thick high-Z mask. A series of Monte Carlo calculations was conducted to evaluate and quantify the cosmic-ray-induced neutral particle background produced in a coded-aperture mask. It is shown that this component may be neglected, being at least a factor of 50 lower in intensity than the cosmic diffuse gamma rays.

  20. Feasibility testing of a pre-clinical coded aperture phase contrast imaging configuration using a simple fast Monte Carlo simulator

    PubMed Central

    Kavanagh, Anthony; Olivo, Alessandro; Speller, Robert; Vojnovic, Borivoj

    2013-01-01

    A simple method of simulating possible coded aperture phase contrast X-ray imaging apparatus is presented. The method is based on ray tracing, with the rays treated ballistically within a voxelized sample and with the phase-shift-induced angular deviations and absorptions applied at a plane in the middle of the sample. For the particular case of a coded aperture phase contrast configuration suitable for small animal pre-clinical imaging we present results obtained using a high resolution voxel array representation of a mathematically-defined ‘digital’ mouse. At the end of the article a link to the software is supplied. PMID:24466479

  1. Compatibility of Spatially Coded Apertures with a Miniature Mattauch-Herzog Mass Spectrograph

    NASA Astrophysics Data System (ADS)

    Russell, Zachary E.; DiDona, Shane T.; Amsden, Jason J.; Parker, Charles B.; Kibelka, Gottfried; Gehm, Michael E.; Glass, Jeffrey T.

    2016-04-01

    In order to minimize the losses in signal intensity often present in mass spectrometry miniaturization efforts, we recently applied the principles of spatially coded apertures to magnetic sector mass spectrometry, thereby achieving increases in signal intensity of greater than 10× with no loss in mass resolution [Chen et al., J. Am. Soc. Mass Spectrom. 26, 1633-1640, 2015; Russell et al., J. Am. Soc. Mass Spectrom. 26, 248-256, 2015]. In this work, we simulate theoretical compatibility and demonstrate preliminary experimental compatibility of the Mattauch-Herzog mass spectrograph geometry with spatial coding. For the simulation-based theoretical assessment, COMSOL Multiphysics finite element solvers were used to simulate the electric and magnetic fields, and a custom particle tracing routine was written in C# that allowed calculation of more than 15 million particle trajectory time steps per second. Preliminary experimental results demonstrating the compatibility of spatial coding with the Mattauch-Herzog geometry were obtained using a commercial miniature mass spectrograph from OI Analytical/Xylem.

  3. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms, ocean image synthesis and surface wave analysis, were implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High-resolution images of the ocean often contain no striking features, and subtle image modulations by wind-generated surface waves become apparent only when large ocean regions are studied with Fourier transforms to reveal the periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large-scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives, and recreating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformation with ground station processors.
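    The archive-the-spectrum idea reduces, at its core, to a Fourier round trip. A minimal sketch with a synthetic swell-like tile (illustrative only; real SAR processing and any spectral truncation or coding step are beyond this example):

```python
import numpy as np

# Archive the complex 2-D Fourier spectrum of an image tile instead of
# the tile itself, then recreate the scene by inverse transformation.
# The sinusoidal "swell" pattern is a stand-in for a real SAR ocean tile.

N = 128
y, x = np.mgrid[0:N, 0:N]
tile = 1.0 + 0.3 * np.sin(2 * np.pi * (4 * x + 2 * y) / N)  # swell-like wave

spectrum = np.fft.fft2(tile)          # what would be transmitted/archived
recreated = np.fft.ifft2(spectrum).real

print(np.allclose(recreated, tile))   # lossless round trip
```

The untruncated spectrum is lossless; the data reduction in the paper comes from keeping only the spectral descriptors of the wave field rather than the full complex spectrum.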

  4. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier-domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare them with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for nonexperts using ISAM imaging is significantly lowered. PMID:26974799
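    The "metrics-assisted parameter search" framing can be illustrated with a toy one-dimensional defocus problem: a quadratic spectral phase (standing in for dispersion) is tuned by maximizing an image-sharpness metric over a grid. Both the signal model and the metric are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

# A point-like "ideal" signal is blurred by an unknown quadratic spectral
# phase (a stand-in for dispersion).  A grid search over the correction
# coefficient picks the value that maximizes a sharpness metric, instead
# of requiring manual adjustment.

N = 256
k = np.fft.fftfreq(N)

true_a = 30.0                                    # unknown dispersion-like term
ideal = np.zeros(N)
ideal[[60, 130, 200]] = 1.0                      # three point reflectors
blurred = np.fft.ifft(np.fft.fft(ideal) * np.exp(1j * true_a * (2 * np.pi * k) ** 2))

def sharpness(a):
    """Metric after applying the trial correction: peaky images score higher."""
    corrected = np.fft.ifft(np.fft.fft(blurred) * np.exp(-1j * a * (2 * np.pi * k) ** 2))
    intensity = np.abs(corrected) ** 2
    return np.sum(intensity ** 2)

grid = np.linspace(0, 60, 121)
best = grid[np.argmax([sharpness(a) for a in grid])]
print(best)                                      # recovers ~true_a
```

Because the total intensity is conserved by the phase-only correction, the sum-of-squared-intensity metric peaks exactly when the energy is most concentrated, i.e., at the true coefficient.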

  5. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-01

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency modulated continuous wave W-band RADAR. Using a single detector, a frequency-stepped source, and a lateral translation stage, inverse synthetic aperture RADAR reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determines successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as one-quarter of the lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made. PMID:24921545

  6. Requirements for imaging vulnerable plaque in the coronary artery using a coded aperture imaging system

    NASA Astrophysics Data System (ADS)

    Tozian, Cynthia

    A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the APoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes combined with the thin sintered tungsten plate was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data as well as other plate designs were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture causing sudden, unexpected coronary occlusion and death.
The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved
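The NTHT MURA pattern described in the abstract can be generated algorithmically. A minimal sketch follows; the small rank p = 5 is purely illustrative (the actual plate totaled 1,920 pinholes), and only the standard MURA construction and the 2x2 no-two-holes-touching embedding are shown.

```python
# Sketch: build a MURA coded-aperture pattern and its no-two-holes-touching
# (NTHT) variant. Rank p = 5 is illustrative, not the plate's actual size.

def quadratic_residues(p):
    return {(i * i) % p for i in range(1, p)}

def mura(p):
    """p x p MURA mask (p must be prime); 1 = open pinhole, 0 = opaque."""
    qr = quadratic_residues(p)
    mask = [[0] * p for _ in range(p)]
    for i in range(p):
        for j in range(p):
            if i == 0:
                mask[i][j] = 0
            elif j == 0:
                mask[i][j] = 1
            else:
                # open where C_i * C_j = +1 (C = +1 on quadratic residues)
                ci = 1 if i in qr else -1
                cj = 1 if j in qr else -1
                mask[i][j] = 1 if ci * cj == 1 else 0
    return mask

def ntht(mask):
    """Embed each element in a 2x2 cell so no two open holes touch."""
    n = len(mask)
    out = [[0] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            out[2 * i][2 * j] = mask[i][j]
    return out

open_fraction = sum(map(sum, mura(5))) / 25.0  # close to 1/2 for a MURA
```

The NTHT spacing is what makes a pattern like this manufacturable as discrete pinholes in a thick sintered-tungsten plate, at the cost of a larger mask area.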

  7. Piecewise spectrally band-pass for compressive coded aperture spectral imaging

    NASA Astrophysics Data System (ADS)

    Qian, Lu-Lu; Lü, Qun-Bo; Huang, Min; Xiang, Li-Bin

    2015-08-01

Coded aperture snapshot spectral imaging (CASSI) has attracted considerable attention in recent years. It offers the remarkable advantages of high optical throughput and snapshot imaging: the entire spatial-spectral data-cube can be reconstructed from just a single two-dimensional (2D) compressive sensing measurement. For less spectrally sparse scenes, however, insufficient sparse sampling and aliasing in the spatial-spectral images reduce the accuracy of the reconstructed three-dimensional (3D) spectral cube. To solve this problem, this paper presents an extended CASSI scheme: a band-pass filter array is mounted on the coded mask, dividing the first image plane into several continuous spectral sub-band areas, and the entire 3D spectral cube is captured through relative movement between the object and the instrument. The principle analysis and imaging simulation are presented. Comparison of the peak signal-to-noise ratio (PSNR) and the information entropy of the reconstructed images for different numbers of spectral sub-band areas shows an observable improvement in reconstruction fidelity as the number of sub-bands increases and the number of spectral channels per sub-band correspondingly decreases. Project supported by the National Natural Science Foundation for Distinguished Young Scholars of China (Grant No. 61225024) and the National High Technology Research and Development Program of China (Grant No. 2011AA7012022).
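The two reconstruction-quality metrics the abstract compares, PSNR and information entropy, are standard and easy to state concretely. A minimal sketch over flat lists of 8-bit pixel values (the sample data is illustrative, not from the paper):

```python
# Sketch: PSNR and information entropy for 8-bit images stored as flat lists.
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def entropy(img, levels=256):
    """Shannon entropy (bits/pixel) of the image's gray-level histogram."""
    hist = [0] * levels
    for v in img:
        hist[v] += 1
    n = len(img)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

ref  = [10, 50, 90, 130, 170, 210]   # illustrative reference pixels
test = [12, 48, 91, 129, 172, 208]   # illustrative reconstruction
```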

  8. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false-alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data from a Global Positioning System (GPS) and an Inertial Navigation System (INS) must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with the detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data. 
Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  9. Adaptation of gasdynamical codes to the modern supercomputers

    NASA Astrophysics Data System (ADS)

    Kaygorodov, P. V.

    2016-02-01

During the last few decades, supercomputer architecture has changed significantly, and it is now impossible to achieve peak performance without adapting numerical codes to modern supercomputer architectures. In this paper, I share my experience in adapting astrophysical gasdynamical numerical codes to multi-node computing clusters with multi-CPU and multi-GPU nodes.

  10. Adaptation of bit error rate by coding

    NASA Astrophysics Data System (ADS)

    Marguinaud, A.; Sorton, G.

    1984-07-01

The use of coding in spacecraft wideband communication to reduce transmission power, save bandwidth, and lower antenna specifications was studied. The feasibility of a coder-decoder operating at a bit rate of 10 Mb/sec with a raw bit error rate (BER) of 0.001 and an output BER of 0.000000001 is demonstrated. Single-level block-code protection and two-level coding protection are examined. A single-level BCH code with a 5-error correction capacity, 16% redundancy, and interleaving depth 4, giving a coded block of 1020 bits, is simple to implement but yields BER = 0.000000007. A single-level BCH code with a 7-error correction capacity and 12% redundancy meets the specifications but is more difficult to implement. Two-level protection with a 9% BCH outer code and a 10% BCH inner code, both levels with a 3-error correction capacity and 8% redundancy, for a coded block of 7050 bits, is the most complex but offers performance advantages.
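The trade-off between the 5-error and 7-error BCH options above comes down to the residual block-error probability of a t-error-correcting code on a channel with a given raw BER. A minimal sketch of that calculation, assuming independent bit errors; the code length n = 255 is an illustrative BCH length, not a value from the abstract:

```python
# Sketch: residual block-error probability of a t-error-correcting block code
# of length n over a binary symmetric channel with raw bit error rate p.
import math

def block_error_prob(n, t, p):
    """P(more than t of n bits are in error), assuming independent errors."""
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(t + 1, n + 1))

raw_ber = 1e-3
pe_t5 = block_error_prob(255, 5, raw_ber)  # 5-error correction
pe_t7 = block_error_prob(255, 7, raw_ber)  # 7-error correction
assert pe_t7 < pe_t5  # more correction capacity, lower residual error
```

This is the shape of the argument in the abstract: at a raw BER of 0.001, raising the correction capacity from t = 5 to t = 7 drops the residual error rate by several orders of magnitude, at the cost of decoder complexity.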

  11. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    SciTech Connect

    Joshi, S; Kaye, W; Jaworski, J; He, Z

    2015-06-15

Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow”, or count distribution, that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for
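The backprojection step the abstract describes, matching the detector count distribution against shifted copies of the mask pattern, can be shown in a toy one-dimensional geometry. The length-7 mask below is a standard m-sequence with a perfect cyclic autocorrelation, chosen for illustration; it is not the Polaris system's actual pattern.

```python
# Sketch: 1-D coded-aperture backprojection. The detector counts are
# cross-correlated with the mask; the source direction is the shift with
# the highest correlation score.

mask = [1, 1, 1, 0, 1, 0, 0]  # length-7 m-sequence, 1 = open element

def shadow(mask, shift):
    """Noiseless count distribution cast by a point source at this shift."""
    n = len(mask)
    return [mask[(i + shift) % n] for i in range(n)]

def backproject(counts, mask):
    """Return the shift whose mask alignment best explains the counts."""
    n = len(mask)
    scores = [sum(c * mask[(i + s) % n] for i, c in enumerate(counts))
              for s in range(n)]
    return max(range(n), key=lambda s: scores[s])

recovered = backproject(shadow(mask, 4), mask)  # recovers shift 4
```

The mask's flat off-peak autocorrelation is what makes the recovered shift unambiguous; real 2-D mask designs (MURAs, random arrays) pursue the same property.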

  12. Performance of a Fieldable Large-Area, Coded-Aperture, Gamma Imager

    SciTech Connect

    Habte Ghebretatios, Frezghi; Cunningham, Mark F; Fabris, Lorenzo; Ziock, Klaus-Peter

    2007-01-01

    We recently developed a fieldable large-area, coded-aperture, gamma imager (the Large Area Imager - LAI). The instrument was developed to detect weak radiation sources in a fluctuating natural background. Ideally, the efficacy of the instrument is determined using receiver-operator statistics generated from measurement data in terms of probability of detection versus probability of false alarm. However, due to the impracticality of hiding many sources in public areas, it is difficult to measure the data required to generate receiver-operator characteristic (ROC) curves. Instead, we develop a high statistics "model source" from measurements of a real point source and then inject the model source into data collected from the world at large where, presumably, no source exists. In this paper we have applied this "source injection" technique to evaluate the performance of the LAI. We plotted ROC curves obtained for different source locations from the imager and for different source strengths when the source is injected at 50 m from the imager. The result shows that this prototype instrument provides excellent performance for a 1-mCi source at a distance of 50 m from the imager in a single pass at 25 mph.
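The "source injection" evaluation above can be sketched end-to-end: a model source signal is added to background-only measurements, detection is a threshold test, and sweeping the threshold traces out a ROC curve. The Gaussian count model and all numbers below are illustrative assumptions, not the LAI's measured statistics.

```python
# Sketch: ROC curve from injected-source vs. background-only count records.
import random

random.seed(1)
background = [random.gauss(100.0, 10.0) for _ in range(2000)]  # counts/pass
injected = [b + 25.0 for b in background[:1000]]  # model source injected

def roc_point(threshold):
    """(probability of false alarm, probability of detection) at a threshold."""
    pfa = sum(b > threshold for b in background) / len(background)
    pd = sum(s > threshold for s in injected) / len(injected)
    return pfa, pd

curve = [roc_point(t) for t in range(80, 160)]  # one ROC point per threshold
```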

  13. Broadband chirality-coded meta-aperture for photon-spin resolving.

    PubMed

    Du, Luping; Kou, Shan Shan; Balaur, Eugeniu; Cadusch, Jasper J; Roberts, Ann; Abbey, Brian; Yuan, Xiao-Cong; Tang, Dingyuan; Lin, Jiao

    2015-01-01

    The behaviour of light transmitted through an individual subwavelength aperture becomes counterintuitive in the presence of surrounding 'decoration', a phenomenon known as the extraordinary optical transmission. Despite being polarization-sensitive, such an individual nano-aperture, however, often cannot differentiate between the two distinct spin-states of photons because of the loss of photon information on light-aperture interaction. This creates a 'blind-spot' for the aperture with respect to the helicity of chiral light. Here we report the development of a subwavelength aperture embedded with metasurfaces dubbed a 'meta-aperture', which breaks this spin degeneracy. By exploiting the phase-shaping capabilities of metasurfaces, we are able to create specific meta-apertures in which the pair of circularly polarized light spin-states produces opposite transmission spectra over a broad spectral range. The concept incorporating metasurfaces with nano-apertures provides a venue for exploring new physics on spin-aperture interaction and potentially has a broad range of applications in spin-optoelectronics and chiral sensing. PMID:26628047

  14. Broadband chirality-coded meta-aperture for photon-spin resolving

    PubMed Central

    Du, Luping; Kou, Shan Shan; Balaur, Eugeniu; Cadusch, Jasper J.; Roberts, Ann; Abbey, Brian; Yuan, Xiao-Cong; Tang, Dingyuan; Lin, Jiao

    2015-01-01

    The behaviour of light transmitted through an individual subwavelength aperture becomes counterintuitive in the presence of surrounding ‘decoration', a phenomenon known as the extraordinary optical transmission. Despite being polarization-sensitive, such an individual nano-aperture, however, often cannot differentiate between the two distinct spin-states of photons because of the loss of photon information on light-aperture interaction. This creates a ‘blind-spot' for the aperture with respect to the helicity of chiral light. Here we report the development of a subwavelength aperture embedded with metasurfaces dubbed a ‘meta-aperture', which breaks this spin degeneracy. By exploiting the phase-shaping capabilities of metasurfaces, we are able to create specific meta-apertures in which the pair of circularly polarized light spin-states produces opposite transmission spectra over a broad spectral range. The concept incorporating metasurfaces with nano-apertures provides a venue for exploring new physics on spin-aperture interaction and potentially has a broad range of applications in spin-optoelectronics and chiral sensing. PMID:26628047

  15. An Adaptive Code for Radial Stellar Model Pulsations

    NASA Astrophysics Data System (ADS)

    Buchler, J. Robert; Kolláth, Zoltán; Marom, Ariel

    1997-09-01

We describe an implicit 1-D adaptive mesh hydrodynamics code that is specially tailored for radial stellar pulsations. In the Lagrangian limit the code reduces to the well-tested Fraley scheme. The code has the useful feature that unwanted, long-lasting transients can be avoided by smoothly switching on the adaptive mesh features starting from the Lagrangian code. Thus, a limit-cycle pulsation that can readily be computed with the relaxation method of Stellingwerf will converge in a few tens of pulsation cycles when put into the adaptive mesh code. The code has been checked against two shock problems, viz. Noh and Sedov, for which analytical solutions are known, and has been found to be both accurate and stable. Superior results were obtained by solving the total energy (gravitational + kinetic + internal) equation rather than the internal energy equation alone.

  16. Study on extremizing adaptive systems and applications to synthetic aperture radars

    NASA Astrophysics Data System (ADS)

    Politis, D. T.

    1983-05-01

Klopf's work on the functioning of the neuron was studied and critically examined for engineering application possibilities. Similarly, Barto's work on the implementation of Klopf's ideas in computer-simulated nets/systems was studied to determine if it could provide suitable models for physical systems. The latest learning system investigated by Barto, described as "Learning with an Adaptive Critic," was considered the most promising for engineering applications. A functional engineering model of that system has been developed, and its dynamic behavior is currently being investigated in order to improve our understanding of the system's operation and potential applications. In parallel with this study, we are looking for possible applications of such learning systems in synthetic aperture radars and data exploitation. Several potential applications have already been suggested. These suggestions will be further explored, and the most promising will be proposed for full investigation and possible implementation.

  17. Limiting sensitivities of coded-aperture telescopes for gamma-ray astronomy: Balloon-Borne fixed-mask systems

    NASA Astrophysics Data System (ADS)

    Owens, Alan

    1990-07-01

The limiting sensitivities of coded-aperture imaging telescopes employing fixed masks are derived for continuum and line emission from cosmic point sources. The sensitivities are calculated for a single-source observation and do not take into consideration the many advantages offered by a multiplex system; for instance, low susceptibility to secular background changes and the ability to observe more than one source during an observation period. For the nuclear transition energy region, it is shown that the utilization of a coded-aperture mask by a particular detection system does not significantly degrade its performance relative to conventional, sequential scanning instruments. It is further shown that for short source observation times (e.g., typical of those obtained from stratospheric balloons), the coded-aperture imaging technique can be particularly advantageous. The effects of a non-uniform instrumental background on the imaging process are discussed and a correction procedure suggested. It is found that by careful planning of the observing program coupled with a stable instrument design, image degradation due to background non-uniformities can be made arbitrarily small and the resulting performance made to approach that predicted for an equivalent mask-antimask system.

  18. Results of investigation of adaptive speech codes

    NASA Astrophysics Data System (ADS)

    Nekhayev, A. L.; Pertseva, V. A.; Sitnyakovskiy, I. V.

    1984-06-01

The search for ways of increasing the effectiveness of speech signals in digital form led to various encoding methods that reduce the redundancy of specific properties of the speech signal. It is customary to divide speech codes into two large classes: codes of signal parameters (vocoders) and codes of the signal form (CSF). In telephony, preference is given to the second class of systems, which maintains naturalness of sound. The class of CSF has expanded considerably because of the development of codes based on the frequency representation of a signal. The greatest interest attaches to encoding methods such as pulse-code modulation (PCM), differential PCM (DPCM), and delta modulation (DM). However, developers of digital transmission systems find it difficult to compile a complete picture of the applicability of specific types of codes. The best-known versions of the codes are evaluated by means of subjective-statistical measurements of their characteristics. The results obtained help developers draw conclusions regarding the applicability of the codes considered in various communication systems.

  19. Self characterization of a coded aperture array for neutron source imaging.

    PubMed

    Volegov, P L; Danly, C R; Fittinghoff, D N; Guler, N; Merrill, F E; Wilde, C H

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF. PMID:25554292

  20. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  1. Self characterization of a coded aperture array for neutron source imaging

    SciTech Connect

    Volegov, P. L. Danly, C. R.; Guler, N.; Merrill, F. E.; Wilde, C. H.; Fittinghoff, D. N.

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (∼100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  2. GPU-based ultra-fast direct aperture optimization for online adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Jia, Xun; Jiang, Steve B.

    2010-08-01

    Online adaptive radiation therapy (ART) has great promise to significantly reduce normal tissue toxicity and/or improve tumor control through real-time treatment adaptations based on the current patient anatomy. However, the major technical obstacle for clinical realization of online ART, namely the inability to achieve real-time efficiency in treatment re-planning, has yet to be solved. To overcome this challenge, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) direct aperture optimization (DAO) algorithm on the graphics processing unit (GPU) based on our previous work on the CPU. We formulate the DAO problem as a large-scale convex programming problem, and use an exact method called the column generation approach to deal with its extremely large dimensionality on the GPU. Five 9-field prostate and five 5-field head-and-neck IMRT clinical cases with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size were tested to evaluate our algorithm on the GPU. It takes only 0.7-3.8 s for our implementation to generate high-quality treatment plans on an NVIDIA Tesla C1060 GPU card. Our work has therefore solved a major problem in developing ultra-fast (re-)planning technologies for online ART.

  3. Development of a prototype scintillator-based portable γ-ray imager with coded aperture for intraoperative applications

    NASA Astrophysics Data System (ADS)

    Shimazoe, K.; Horiki, K.; Takahashi, H.

    2015-06-01

    In surgical treatment, the intraoperative detection of tissues is becoming important in order to localize malignant functionality. Visual inspection, which is mostly used, results in lower contrast whereas dual inspection with radio and optical sensors is more promising for accurate detection. A scintillator-based portable gamma-ray imager with a coded aperture has been designed and fabricated for exploring the use of coded-aperture imaging in intraoperative applications. The gamma-ray detector is composed of a 12×12 array of 2×2×10 mm3 Ce:GAGG (Ce doped Gd3Al2Ga3O12) crystals individually coupled to a 12×12 avalanche photodiode (APD) array. The APDs are individually read out using a custom-designed readout system using a time-over-threshold application-specific integrated circuit and field-programmable gate array. The coded aperture consists of M-array based holes of 0.5 mm size on a 0.3-mm thickness tungsten collimator. The imaging performance in the x, y, and z directions is measured and characterized for 122-keV gamma rays.

  4. A CLOSE COMPANION SEARCH AROUND L DWARFS USING APERTURE MASKING INTERFEROMETRY AND PALOMAR LASER GUIDE STAR ADAPTIVE OPTICS

    SciTech Connect

    Bernat, David; Bouchez, Antonin H.; Cromer, John L.; Dekany, Richard G.; Moore, Anna M.; Ireland, Michael; Tuthill, Peter; Martinache, Frantz; Angione, John; Burruss, Rick S.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; Kibblewhite, Edward; McKenna, Daniel L.; Petrie, Harold L.; Roberts, Jennifer; Shelton, J. Chris; Thicksten, Robert P.; Trinh, Thang

    2010-06-01

We present a close companion search around 16 known early L dwarfs using aperture masking interferometry with Palomar laser guide star adaptive optics (LGS AO). The use of aperture masking allows the detection of close binaries, corresponding to projected physical separations of 0.6-10.0 AU for the targets of our survey. This survey achieved median contrast limits of ΔK ≈ 2.3 for separations between 1.2λ/D and 4λ/D, and ΔK ≈ 1.4 at 2/3 λ/D. We present four candidate binaries detected with moderate-to-high confidence (90%-98%). Two have projected physical separations less than 1.5 AU. This may indicate that tight-separation binaries contribute more significantly to the binary fraction than currently assumed, consistent with spectroscopic and photometric overluminosity studies. Ten targets of this survey have previously been observed with the Hubble Space Telescope as part of companion searches. We use the increased resolution of aperture masking to search for close or dim companions that would be obscured by full aperture imaging, finding two candidate binaries. This survey is the first application of aperture masking with LGS AO at Palomar. Several new techniques for the analysis of aperture masking data in the low signal-to-noise regime are explored.

  5. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. It can achieve arbitrarily small redundancy and admits a simple and fast decoder.
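The core idea of the abstract, a per-bit probability estimate updated from previously encoded bits, can be illustrated by accumulating the ideal code length -log2(p) under such an adaptive model. This sketch shows the adaptive-probability idea only; it is not the paper's coding technique, and the Laplace-smoothed estimator is my own illustrative choice.

```python
# Sketch: ideal code length of a bit sequence under an adaptive probability
# model, where each bit's estimate depends on the previously coded bits.
import math

def adaptive_code_length(bits):
    ones, total = 1, 2          # Laplace-smoothed counts: start at p(1) = 1/2
    length = 0.0
    for b in bits:
        p1 = ones / total       # estimate from previously encoded bits only
        length += -math.log2(p1 if b else 1.0 - p1)
        ones += b
        total += 1
    return length

skewed = [1] * 90 + [0] * 10    # heavily biased source
assert adaptive_code_length(skewed) < len(skewed)  # beats 1 bit per bit
```

An actual coder (arithmetic or otherwise) then has to realize these code lengths with an efficiently decodable bitstream, which is where the paper's contribution lies.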

  6. JPEG 2000 coding of image data over adaptive refinement grids

    NASA Astrophysics Data System (ADS)

    Gamito, Manuel N.; Dias, Miguel S.

    2003-06-01

An extension of the JPEG 2000 standard is presented for non-conventional images resulting from an adaptive subdivision process. Samples, generated through adaptive subdivision, can have different sizes, depending on the amount of subdivision that was locally introduced in each region of the image. The subdivision principle allows each individual sample to be recursively subdivided into sets of four progressively smaller samples. Image datasets generated through adaptive subdivision find application in Computational Physics, where simulations of natural processes are often performed over adaptive grids. It is also found that compression gains can be achieved for non-natural imagery, like text or graphics, if they first undergo an adaptive subdivision process. The representation of adaptive subdivision images is performed by first coding the subdivision structure into the JPEG 2000 bitstream, in a lossless manner, followed by the quantized and entropy-coded transform coefficients. Due to the irregular distribution of sample sizes across the image, the wavelet transform must be applied on irregular image subsets that are nested across all the resolution levels. Using the conventional JPEG 2000 coding standard, adaptive subdivision images would first have to be upsampled to the smallest sample size in order to attain a uniform resolution. The proposed method for coding adaptive subdivision images is shown to perform better than conventional JPEG 2000 for medium to high bitrates.
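The recursive four-way subdivision the abstract describes amounts to a quadtree refined where the image varies locally. A minimal sketch, where the refinement criterion (local value range vs. a tolerance) and the sample data are illustrative assumptions; the paper's extension additionally codes this structure into the JPEG 2000 bitstream.

```python
# Sketch: quadtree sample subdivision. Smooth regions keep one large sample;
# varying regions are recursively split into four smaller samples.

def refine(img, x, y, size, tol):
    """Return a sample value (leaf) or a list of four refined quadrants."""
    vals = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if size == 1 or max(vals) - min(vals) <= tol:
        return sum(vals) / len(vals)            # one (larger) sample
    h = size // 2
    return [refine(img, x,     y,     h, tol),  # NW
            refine(img, x + h, y,     h, tol),  # NE
            refine(img, x,     y + h, h, tol),  # SW
            refine(img, x + h, y + h, h, tol)]  # SE

img = [[0, 0, 9, 1],
       [0, 0, 2, 3],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
tree = refine(img, 0, 0, 4, tol=0)  # only the NE quadrant subdivides fully
```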

  7. Adaptive Coding and Modulation Scheme for Ka Band Space Communications

    NASA Astrophysics Data System (ADS)

    Lee, Jaeyoon; Yoon, Dongweon; Lee, Wooju

    2010-06-01

Rain attenuation can seriously reduce the availability of a Ka-band space communication link. To reduce the effect of rain attenuation on the error performance of space communications in Ka band, an adaptive coding and modulation (ACM) scheme is required. In this paper, to achieve reliable telemetry data transmission, we propose adaptive coding and modulation levels using the turbo code recommended by the Consultative Committee for Space Data Systems (CCSDS) and various modulation methods (QPSK, 8PSK, 4+12 APSK, and 4+12+16 APSK) adopted in Digital Video Broadcasting - Satellite - Second Generation (DVB-S2).
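The ACM principle above reduces to selecting, from a mode table ordered by robustness, the highest-throughput mode the rain-faded link still supports. A minimal sketch using the modulation set named in the abstract; the SNR thresholds and spectral efficiencies are illustrative assumptions, not CCSDS or DVB-S2 values.

```python
# Sketch: adaptive coding/modulation mode selection under rain attenuation.
# Modes are ordered by ascending SNR threshold (and ascending throughput).

ACM_MODES = [  # (min SNR in dB, mode name, bits per symbol) -- illustrative
    (2.0,  "QPSK 1/2 turbo",         1.0),
    (6.0,  "8PSK 2/3 turbo",         2.0),
    (10.0, "4+12 APSK 3/4 turbo",    3.0),
    (14.0, "4+12+16 APSK 5/6 turbo", 3.3),
]

def select_mode(snr_db, rain_att_db=0.0):
    """Pick the highest-throughput mode the rain-faded link supports."""
    effective = snr_db - rain_att_db
    best = None
    for threshold, name, eff in ACM_MODES:
        if effective >= threshold:
            best = (name, eff)
    return best  # None: the link is below even the most robust mode

clear_sky = select_mode(12.0)                    # high-order APSK mode
heavy_rain = select_mode(12.0, rain_att_db=8.0)  # falls back to QPSK
```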

  8. Exo-planet Direct Imaging with On-Axis and/or Segmented Apertures in Space: Adaptive Compensation of Aperture Discontinuities

    NASA Astrophysics Data System (ADS)

    Soummer, Remi

Capitalizing on a recent breakthrough in wavefront control theory for obscured apertures made by our group, we propose to demonstrate a method to achieve high contrast exoplanet imaging with on-axis obscured apertures. Our new algorithm, which we named Adaptive Compensation of Aperture Discontinuities (ACAD), provides the ability to compensate for aperture discontinuities (segment gaps and/or secondary mirror supports) by controlling deformable mirrors in a nonlinear wavefront control regime not utilized before but conceptually similar to the beam reshaping used in PIAA coronagraphy. We propose here an in-air demonstration at 1e-7 contrast, enabled by adding a second deformable mirror to our current test-bed. This expansion of the scope of our current efforts in exoplanet imaging technologies will enable us to demonstrate an integrated solution for wavefront control and starlight suppression on complex aperture geometries. It is directly applicable at scales from moderate-cost exoplanet probe missions to the 2.4 m AFTA telescopes to future flagship UVOIR observatories with apertures of potentially 16-20 m. Searching for nearby habitable worlds with direct imaging is one of the top scientific priorities established by the Astro2010 Decadal Survey. Achieving this ambitious goal will require 1e-10 contrast on a telescope large enough to provide angular resolution and sensitivity to planets around a significant sample of nearby stars. Such a mission must of course also be realized at an achievable cost. Lightweight segmented mirror technology allows larger diameter optics to fit in any given launch vehicle as compared to monolithic mirrors, and lowers total life-cycle costs from construction through integration & test, making it a compelling option for future large space telescopes. At smaller scales, on-axis designs with secondary obscurations and supports are less challenging to fabricate and thus more affordable than the off-axis unobscured primary mirror designs

  9. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  10. BAT Slew Survey (BATSS): Slew Data Analysis for the Swift-BAT Coded Aperture Imaging Telescope

    NASA Astrophysics Data System (ADS)

    Copete, Antonio Julio

    The BAT Slew Survey (BATSS) is the first wide-field survey of the hard X-ray sky (15--150 keV) with a slewing coded aperture imaging telescope. Its fine time resolution, high sensitivity and large sky coverage make it particularly well-suited for detections of transient sources with variability timescales in the ~1 sec--1 hour range, such as Gamma-Ray Bursts (GRBs), flaring stars and Blazars. As implemented, BATSS observations are found to be consistently more sensitive than their BAT pointing-mode counterparts, by an average of 20% over the 10 sec--3 ksec exposure range, due to intrinsic systematic differences between them. The survey's motivation, development and implementation are presented, including a description of the software and hardware infrastructure that made this effort possible. The analysis of BATSS science data concentrates on the results of the 4.8-year BATSS GRB survey, beginning with the discovery of GRB 070326 during its preliminary testing phase. A total of nineteen (19) GRBs were detected exclusively in BATSS slews over this period, making it the largest contribution to the Swift GRB catalog from all ground-based analysis. The timing and spectral properties of prompt emission from BATSS GRBs reveal their consistency with Swift long GRBs (L-GRBs), though with instances of GRBs with unusually soft spectra or X-Ray Flashes (XRFs), GRBs near the faint end of the fluence distribution accessible to Swift-BAT, and a probable short GRB with extended emission, all uncommon traits within the general Swift GRB population. In addition, the BATSS overall detection rate of 0.49 GRBs/day of instrument time is a significant increase (45%) above the BAT pointing detection rate. This result was confirmed by a GRB detection simulation model, which further showed the increased sky coverage of slews to be the dominant effect in enhancing GRB detection probabilities. A review of lessons learned is included, with specific proposals to broaden both the number and

  11. CAMERA: a compact, automated, laser adaptive optics system for small aperture telescopes

    NASA Astrophysics Data System (ADS)

    Britton, Matthew; Velur, Viswa; Law, Nick; Choi, Philip; Penprase, Bryan E.

    2008-07-01

    CAMERA is an autonomous laser guide star adaptive optics system designed for small aperture telescopes. This system is intended to be mounted permanently on such a telescope to provide large amounts of flexibly scheduled observing time, delivering high angular resolution imagery in the visible and near infrared. The design employs a Shack-Hartmann wavefront sensor, a 12x12 actuator MEMS device for high order wavefront compensation, and a solid state 355 nm Nd:YAG laser to generate a guide star. Commercial CCD and InGaAs detectors provide coverage in the visible and near infrared. CAMERA operates by selecting targets from a queue populated by users and executing these observations autonomously. This robotic system is targeted towards applications that are difficult to address using classical observing strategies: surveys of very large target lists, recurrently scheduled observations, and rapid-response followup of transient objects. This system has been designed and costed, and a lab testbed has been developed to evaluate key components and validate autonomous operations.

  12. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) on various directional spatial frequency axes for a cubic phase mask (CPM) with circular and square apertures are investigated. Although the OTF has no zero points, for a circular aperture it takes values very close to zero at low frequencies on the diagonal axis, which results in degradation of restored images. The reason for the close-to-zero values in the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.
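The diagonal-axis OTF behavior described above can be probed numerically. In this sketch the grid size, cubic coefficient, and equal pupil half-widths are illustrative assumptions, and a normalized discrete autocorrelation of the pupil stands in for the continuous OTF:

```python
import cmath

N = 33                      # pupil samples per axis (odd, so 0 is sampled)
ALPHA = 15.0                # assumed cubic-phase coefficient in radians
xs = [(2.0 * i / (N - 1)) - 1.0 for i in range(N)]

def pupil(shape):
    """Cubic-phase pupil P(x,y) = A(x,y) * exp(i*ALPHA*(x^3 + y^3)),
    where A is a circular or square (same half-width) aperture mask."""
    P = {}
    for ix, x in enumerate(xs):
        for iy, y in enumerate(xs):
            if shape == "circle" and x * x + y * y > 1.0:
                continue
            P[(ix, iy)] = cmath.exp(1j * ALPHA * (x ** 3 + y ** 3))
    return P

def otf_diag(P, k):
    """|OTF| at the diagonal shift (k, k), normalized so OTF(0,0) = 1
    (discrete pupil autocorrelation)."""
    energy = sum(abs(v) ** 2 for v in P.values())
    acc = 0j
    for (ix, iy), v in P.items():
        w = P.get((ix + k, iy + k))
        if w is not None:
            acc += w * v.conjugate()
    return abs(acc) / energy

for shape in ("circle", "square"):
    P = pupil(shape)
    print(shape, ["%.3f" % otf_diag(P, k) for k in range(9)])
```

Printing the diagonal |OTF| profiles lets the circular and square cases be compared directly, in the spirit of the paper's analysis.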

  13. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the new upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, regardless of the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase the network capacity or downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can be applied to adapt to mobile wireless channel conditions, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose the modulation type and forward error correction (FEC) coding rate.
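The AMC principle above reduces to picking a modulation/coding pair from reported channel quality. A minimal sketch, assuming an illustrative SNR-threshold table (not the 3GPP CQI/MCS mapping):

```python
# Illustrative AMC table: (minimum SNR in dB, modulation, code rate).
# Thresholds are assumptions for exposition, not standardized values.
MCS_TABLE = [
    (1.0,  "QPSK",  1 / 2),
    (7.0,  "16QAM", 1 / 2),
    (12.0, "16QAM", 3 / 4),
    (18.0, "64QAM", 3 / 4),
]

def select_mcs(snr_db):
    """Pick the highest-order modulation/coding pair whose SNR
    threshold the reported channel quality still meets."""
    chosen = ("QPSK", 1 / 3)        # most robust fallback scheme
    for threshold, modulation, rate in MCS_TABLE:
        if snr_db >= threshold:
            chosen = (modulation, rate)
    return chosen

print(select_mcs(-3.0))   # falls back to the most robust scheme
print(select_mcs(9.5))    # ('16QAM', 0.5)
print(select_mcs(25.0))   # ('64QAM', 0.75)
```

Poor channels get robust low-rate schemes; clean channels get high-order modulation, raising spectral efficiency without raising the error rate.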

  14. High-Frame-Rate Synthetic Aperture Ultrasound Imaging Using Mismatched Coded Excitation Waveform Engineering: A Feasibility Study.

    PubMed

    Lashkari, Bahman; Zhang, Kaicheng; Mandelis, Andreas

    2016-06-01

    Mismatched coded excitation (CE) can be employed to increase the frame rate of synthetic aperture ultrasound imaging. The high autocorrelation and low cross correlation (CC) of the transmitted signals enable the identification and separation of signal sources at the receiver. Thus, the method provides B-mode imaging with simultaneous transmission from several elements and the capability of spatial decoding of the transmitted signals, which makes the imaging process equivalent to consecutive transmissions. Each transmission generates its own image and the combination of all the images results in an image with a high lateral resolution. In this paper, we introduce two different methods for generating multiple mismatched CEs with an identical frequency bandwidth and code length. Therefore, the proposed families of mismatched CEs are able to generate similar resolutions and signal-to-noise ratios. The application of these methods is demonstrated experimentally. Furthermore, several techniques are suggested that can be used to reduce the CC between the mismatched codes. PMID:27101603
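The high-autocorrelation/low-cross-correlation property can be illustrated with a pair of up- and down-sweeping linear-FM pulses sharing one band and length; the band edges and pulse length are assumptions, and the up/down chirp pair is a simple stand-in for the paper's engineered code families:

```python
import math

N = 256                     # code length in samples (assumed)
F0, F1 = 0.05, 0.45         # shared band edges in cycles/sample (assumed)

def chirp(up):
    """Real linear-FM pulse sweeping the band upward or downward.
    Phase is 2*pi times the integral of the instantaneous frequency."""
    out = []
    for n in range(N):
        if up:
            phase = 2 * math.pi * (F0 * n + 0.5 * (F1 - F0) * n * n / N)
        else:
            phase = 2 * math.pi * (F1 * n - 0.5 * (F1 - F0) * n * n / N)
        out.append(math.cos(phase))
    return out

def corr_peak(a, b):
    """Largest |correlation| between a and b over all relative lags."""
    best = 0.0
    for lag in range(-N + 1, N):
        lo, hi = max(0, lag), min(N, N + lag)
        best = max(best, abs(sum(a[i] * b[i - lag] for i in range(lo, hi))))
    return best

up, down = chirp(True), chirp(False)
ratio = corr_peak(up, down) / corr_peak(up, up)
print("cross/auto peak ratio: %.3f" % ratio)
```

The small cross-to-auto peak ratio is what lets the receiver separate simultaneously transmitted codes by matched filtering.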

  15. Adaptive error correction codes for face identification

    NASA Astrophysics Data System (ADS)

    Hussein, Wafaa R.; Sellahewa, Harin; Jassim, Sabah A.

    2012-06-01

    Face recognition in uncontrolled environments is greatly affected by fuzziness of face feature vectors as a result of extreme variation in recording conditions (e.g. illumination, poses or expressions) in different sessions. Many techniques have been developed to deal with these variations, resulting in improved performance. This paper aims to model template fuzziness as errors and investigate the use of error detection/correction techniques for face recognition in uncontrolled environments. Error correction codes (ECC) have recently been used for biometric key generation but not on biometric templates. We have investigated error patterns in binary face feature vectors extracted from different image windows of differing sizes and for different recording conditions. By estimating statistical parameters for the intra-class and inter-class distributions of Hamming distances in each window, we encode with appropriate ECCs. The proposed approach is tested for binarised wavelet templates using two face databases: Extended Yale-B and Yale. We demonstrate that using different combinations of BCH-based ECCs for different blocks and different recording conditions leads to different accuracy rates, and that using ECCs results in significantly improved recognition results.
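The idea of correcting intra-class Hamming errors in a binary template can be sketched with a repetition code standing in for the paper's window-specific BCH codes; the repetition factor, template length, and error model are assumptions:

```python
import random

# Simplified stand-in for per-window BCH coding: a rate-1/3 repetition
# code over a binary face template. Choosing the repetition factor plays
# the role of choosing BCH parameters per window in the paper.
R = 3                          # repetition factor (assumed)

def encode(bits):
    return [b for b in bits for _ in range(R)]

def decode(coded):
    """Majority vote over each group of R repeated bits."""
    out = []
    for i in range(0, len(coded), R):
        group = coded[i:i + R]
        out.append(1 if sum(group) > R // 2 else 0)
    return out

random.seed(1)
template = [random.randint(0, 1) for _ in range(64)]
coded = encode(template)

# Session noise: flip at most one bit per group, mimicking intra-class
# Hamming errors between enrolment and probe sessions.
noisy = coded[:]
for g in range(0, len(noisy), R):
    if random.random() < 0.3:
        noisy[g] ^= 1          # a single error per group is always corrected

assert decode(noisy) == template
print("corrected", sum(a != b for a, b in zip(coded, noisy)), "bit errors")
```

A real BCH code corrects multiple errors per block far more efficiently; the repetition code simply makes the correct-then-match idea concrete.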

  16. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  17. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  18. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.

  19. Validation of coded aperture coherent scatter spectral imaging for normal and neoplastic breast tissues via surgical pathology

    NASA Astrophysics Data System (ADS)

    Morris, R. E.; Albanese, K. E.; Lakshmanan, M. N.; McCall, S. J.; Greenberg, J. A.; Kapadia, A. J.

    2016-03-01

    This study intends to validate the sensitivity and specificity of coded aperture coherent scatter spectral imaging (CACSSI) by comparison to standard histological preparation and pathologic analysis methods used to differentiate normal and neoplastic breast tissues. A composite overlay of the CACSSI rendered image and pathologist interpreted stained sections validate the ability of CACSSI to differentiate normal and neoplastic breast structures ex-vivo. Via comparison to pathologist annotated slides, the CACSSI system may be further optimized to maximize sensitivity and specificity for differentiation of breast carcinomas.

  20. Single-step, quantitative x-ray differential phase contrast imaging using spectral detection in a coded aperture setup

    NASA Astrophysics Data System (ADS)

    Das, Mini; Liang, Zhihua

    2015-03-01

    In this abstract we describe the first non-interferometric x-ray phase contrast imaging (PCI) method that uses only a single-measurement step to retrieve with quantitative accuracy absorption, phase and differential phase. Our approach is based on utilizing spectral information from photon counting spectral detectors in conjunction with a coded aperture PCI setting to simplify the x-ray "phase problem" to a one-step method. The method by virtue of being single-step with no motion of any component for a given projection image has significantly high potential to overcome the barriers currently faced by PCI.

  1. A trellis-searched APC (adaptive predictive coding) speech coder

    SciTech Connect

    Malone, K.T.; Fischer, T.R. (Dept. of Electrical and Computer Engineering)

    1990-01-01

    In this paper we formulate a speech coding system that incorporates trellis coded vector quantization (TCVQ) and adaptive predictive coding (APC). A method for optimizing the TCVQ codebooks is presented and experimental results concerning survivor path mergings are reported. Simulation results are given for encoding rates of 16 and 9.6 kbps for a variety of coder parameters. The quality of the encoded speech is deemed excellent at an encoding rate of 16 kbps and very good at 9.6 kbps. 13 refs., 2 figs., 4 tabs.

  2. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  3. Adaptive Prediction Error Coding in the Human Midbrain and Striatum Facilitates Behavioral Adaptation and Learning Efficiency.

    PubMed

    Diederen, Kelly M J; Spencer, Tom; Vestergaard, Martin D; Fletcher, Paul C; Schultz, Wolfram

    2016-06-01

    Effective error-driven learning benefits from scaling of prediction errors to reward variability. Such behavioral adaptation may be facilitated by neurons coding prediction errors relative to the standard deviation (SD) of reward distributions. To investigate this hypothesis, we required participants to predict the magnitude of upcoming reward drawn from distributions with different SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. In line with the notion of adaptive coding, BOLD response slopes in the Substantia Nigra/Ventral Tegmental Area (SN/VTA) and ventral striatum were steeper for prediction errors occurring in distributions with smaller SDs. SN/VTA adaptation was not instantaneous but developed across trials. Adaptive prediction error coding was paralleled by behavioral adaptation, as reflected by SD-dependent changes in learning rate. Crucially, increased SN/VTA and ventral striatal adaptation was related to improved task performance. These results suggest that adaptive coding facilitates behavioral adaptation and supports efficient learning. PMID:27181060
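The SD-scaled prediction-error idea above can be sketched as a Rescorla-Wagner-style update in which the error is expressed in units of the reward distribution's SD; the learning rate, reward parameters, and trial counts are illustrative assumptions:

```python
import random

random.seed(7)
LR = 0.2                       # learning rate (assumed)

def learn(mean, sd, trials=2000, adaptive=True):
    """Track a reward prediction with (optionally SD-scaled) errors.

    With adaptive=True the prediction error is divided by the SD,
    mimicking neurons that code errors relative to reward variability."""
    pred = 0.0
    for _ in range(trials):
        reward = random.gauss(mean, sd)
        error = reward - pred
        if adaptive:
            error /= sd        # adaptive coding: error in SD units
        pred += LR * error
    return pred

for sd in (2.0, 10.0):
    print("sd =", sd, "final prediction ~", round(learn(20.0, sd), 2))
```

Scaling by SD makes the effective learning rate smaller in high-variability contexts, which is the behavioral signature (SD-dependent learning rates) the study reports.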

  4. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The choice of which coders are selected to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
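A minimal sketch of the threshold-driven coder selection, assuming 1-D orthonormal DCT blocks and a small family of keep-m-coefficients sub-coders in place of the paper's vector-quantized DCT coder mixture:

```python
import math

THRESHOLD = 0.01               # per-sample MSE target (assumed)
CODER_SIZES = [1, 2, 4, 8]     # coefficients kept by each sub-coder (assumed)

def dct(x):
    """Orthonormal DCT-II."""
    N = len(x)
    return [(math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
            * sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                  for n in range(N))
            for k in range(N)]

def idct(X):
    """Inverse of the orthonormal DCT-II."""
    N = len(X)
    return [sum((math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N))
                * X[k] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for k in range(N))
            for n in range(N)]

def code_block(x):
    """Pick the cheapest sub-coder whose reconstruction meets THRESHOLD."""
    X = dct(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    for m in CODER_SIZES:
        keep = set(order[:m])
        rec = idct([X[k] if k in keep else 0.0 for k in range(len(X))])
        mse = sum((a - b) ** 2 for a, b in zip(x, rec)) / len(x)
        if mse <= THRESHOLD or m == CODER_SIZES[-1]:
            return m, rec

smooth = [math.sin(2 * math.pi * n / 8) for n in range(8)]
busy = [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]
print("smooth block keeps", code_block(smooth)[0], "coefficients")
print("busy block keeps", code_block(busy)[0], "coefficients")
```

Smooth regions satisfy the distortion threshold with few coefficients while busy regions escalate to richer coders, which is the variable-rate mechanism MBC exploits.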

  5. Adaptation improves neural coding efficiency despite increasing correlations in variability.

    PubMed

    Adibi, Mehdi; McDonald, James S; Clifford, Colin W G; Arabzadeh, Ehsan

    2013-01-30

    Exposure of cortical cells to sustained sensory stimuli results in changes in the neuronal response function. This phenomenon, known as adaptation, is a common feature across sensory modalities. Here, we quantified the functional effect of adaptation on the ensemble activity of cortical neurons in the rat whisker-barrel system. A multishank array of electrodes was used to allow simultaneous sampling of neuronal activity. We characterized the response of neurons to sinusoidal whisker vibrations of varying amplitude in three states of adaptation. The adaptors produced a systematic rightward shift in the neuronal response function. Consistently, mutual information revealed that peak discrimination performance was not aligned to the adaptor but to test amplitudes 3-9 μm higher. Stimulus presentation reduced single neuron trial-to-trial response variability (captured by Fano factor) and correlations in the population response variability (noise correlation). We found that these two types of variability were inversely proportional to the average firing rate regardless of the adaptation state. Adaptation transferred the neuronal operating regime to lower rates with higher Fano factor and noise correlations. Noise correlations were positive and in the direction of signal, and thus detrimental to coding efficiency. Interestingly, across all population sizes, the net effect of adaptation was to increase the total information despite increasing the noise correlation between neurons. PMID:23365247

  6. Adaptive norm-based coding of facial identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda

    2006-09-01

    Identification of a face is facilitated by adapting to its computationally opposite identity, suggesting that the average face functions as a norm for coding identity [Leopold, D. A., O'Toole, A. J., Vetter, T., & Blanz, V. (2001). Prototype-referenced shape encoding revealed by high-level aftereffects. Nature Neuroscience, 4, 89-94; Leopold, D. A., Rhodes, G., Müller, K. -M., & Jeffery, L. (2005). The dynamics of visual adaptation to faces. Proceedings of the Royal Society of London, Series B, 272, 897-904]. Crucially, this interpretation requires that the aftereffect is selective for the opposite identity, but this has not been convincingly demonstrated. We demonstrate such selectivity, observing a larger aftereffect for opposite than non-opposite adapt-test pairs that are matched on perceptual contrast (dissimilarity). Component identities were also harder to detect in morphs of opposite than non-opposite face pairs. We propose an adaptive norm-based coding model of face identity. PMID:16647736

  7. A Mechanically-Cooled, Highly-Portable, HPGe-Based, Coded-Aperture Gamma-Ray Imager

    SciTech Connect

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Hayward, Jason P; Raffo-Caiado, Ana Claudia

    2010-01-01

    Coded-aperture gamma-ray imaging is a mature technology that is capable of providing accurate and quantitative images of nuclear materials. Although it is potentially of high value to the safeguards and arms-control communities, it has yet to be fully embraced by those communities. One reason for this is the limited choice, high cost, and low efficiency of commercial instruments, while instruments made by research organizations are frequently large and/or unsuitable for field work. In this paper we present the results of a project that mates the coded-aperture imaging approach with the latest in commercially available, position-sensitive, High Purity Germanium (HPGe) detectors. The instrument replaces a laboratory prototype that was unsuitable for anything other than demonstrations. The original instrument, and the cart on which it is mounted to provide mobility and pointing capabilities, has a footprint of ~2/3 m x 2 m, weighs ~100 kg, and requires cryogen refills every few days. In contrast, the new instrument is tripod mounted, weighs on the order of 25 kg, operates with a laptop computer, and is mechanically cooled. The instrument is being used in a program that is exploring the use of combined radiation and laser scanner imaging. The former provides information on the presence, location, and type of nuclear materials while the latter provides design verification information. To align the gamma-ray images with the laser scanner data, the Ge imager is fitted and aligned to a visible-light stereo imaging unit. This unit generates a locus of 3D points that can be matched to the precise laser scanner data. With this approach, the two instruments can be used completely independently at a facility, and yet the data can be accurately overlaid based on the very structures that are being measured.

  8. Higher-frame-rate ultrasound imaging with reduced cross-talk by combining a synthetic aperture and spatial coded excitation

    NASA Astrophysics Data System (ADS)

    Ishihara, Chizue; Ikeda, Teiichiro; Masuzawa, Hiroshi

    2016-04-01

    In recent clinical practice of ultrasound imaging, the importance of high-frame-rate imaging is growing. Simultaneous multiple transmission is one way to increase frame rate while maintaining a spatial resolution and signal-to-noise ratio. However, this technique has an inherent issue in that "cross-talk artifacts" appear between the multiple transmitted pulses. In this study, a novel method providing higher-frame-rate ultrasound imaging with reduced cross-talk by combining a synthetic aperture and spatial coded excitation is proposed. In the proposed method, two coded transmission beams are simultaneously excited during beam steering in the lateral direction. Parallel receive beamforming is then performed in the region around individual transmission beams. Decoding is carried out by using two beamformed signals from a region where laterally neighboring transmission beams overlap. All decoded beamformed signals are then synthesized coherently. The proposed method was evaluated using a simulated phantom image under the assumption of imaging with a general sector probe. Results showed that the method achieved twice the frame rate while maintaining image resolution (105%) and reducing cross-talk artifacts from -37 dB to less than -57 dB.

  9. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  10. Adaptive rezoner in a two-dimensional Lagrangian hydrodynamic code

    SciTech Connect

    Pyun, J.J.; Saltzman, J.S.; Scannapieco, A.J.; Carroll, D.

    1985-01-01

    In an effort to increase spatial resolution without adding additional mesh points, an adaptive mesh was incorporated into a two-dimensional Lagrangian hydrodynamics code along with a two-dimensional flux-corrected transport (FCT) remapper. The adaptive mesh automatically generates a mesh based on smoothness and orthogonality, and at the same time also tracks physical conditions of interest by focusing mesh points in regions that exhibit those conditions; this is done by defining a weighting function associated with the physical conditions to be tracked. The FCT remapper calculates the net transportive fluxes based on a weighted average of two fluxes computed by a low-order scheme and a high-order scheme. This averaging procedure produces solutions which are conservative and nondiffusive, and maintains positivity. 10 refs., 12 figs.
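The low-order/high-order flux blend can be illustrated with a 1-D flux-corrected transport step using the classic Boris-Book limiter. This is an illustrative 1-D linear-advection sketch, far simpler than the paper's 2-D Lagrangian remapper; the grid size, CFL number, and square-wave profile are assumptions:

```python
# 1-D flux-corrected transport (FCT) for linear advection on a periodic
# grid: a monotone low-order (upwind) step plus a limited antidiffusive
# (Lax-Wendroff) correction, combined in conservative flux form.

N, C, STEPS = 100, 0.4, 120    # cells, CFL number a*dt/dx, time steps

def step(u):
    """One conservative FCT update on a periodic grid."""
    n = len(u)
    # Low-order transported/diffused solution (upwind, monotone)
    utd = [u[i] - C * (u[i] - u[i - 1]) for i in range(n)]
    # Raw antidiffusive flux at interface i+1/2 (high-order correction)
    raw = [0.5 * C * (1.0 - C) * (u[(i + 1) % n] - u[i]) for i in range(n)]
    # Boris-Book limiter: never create a new extremum in utd
    fc = []
    for i in range(n):
        s = 1.0 if raw[i] >= 0 else -1.0
        fc.append(s * max(0.0, min(abs(raw[i]),
                                   s * (utd[(i + 2) % n] - utd[(i + 1) % n]),
                                   s * (utd[i] - utd[i - 1]))))
    return [utd[i] - (fc[i] - fc[i - 1]) for i in range(n)]

u = [1.0 if 20 <= i < 40 else 0.0 for i in range(N)]
for _ in range(STEPS):
    u = step(u)
print("mass =", sum(u), "min =", min(u), "max =", max(u))
```

Because the update is in flux form, mass is conserved exactly, and the limiter keeps the square wave within its original bounds, the conservative/positivity properties the abstract cites.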

  11. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.

  12. Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination.

    PubMed

    Thomas, Blake T; Blalock, Davis W; Levy, William B

    2015-07-01

    Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744

  13. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. Firstly, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, including prediction of the size and direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313

  14. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  15. Coded aperture imaging of fusion source in a plasma focus operated with pure D2 and a D2-Kr gas admixture

    SciTech Connect

    Springham, S. V.; Talebitaher, A.; Shutler, P. M. E.; Rawat, R. S.; Lee, P.; Lee, S.

    2012-09-10

    The coded aperture imaging (CAI) technique has been used to investigate the spatial distribution of DD fusion in a 1.6 kJ plasma focus (PF) device operated alternately with pure deuterium or a deuterium-krypton admixture. The coded mask pattern is based on a Singer cyclic difference set with 25% open fraction and is positioned close to 90° to the plasma focus axis, with CR-39 detectors used to register tracks of protons from the D(d, p)T reaction. Comparing the CAI proton images for pure D2 and D2-Kr admixture operation reveals clear differences in size, density, and shape between the fusion sources for the two cases.
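The property that makes a cyclic-difference-set mask suitable for coded aperture imaging is its flat periodic autocorrelation, which allows the recorded shadowgram to be decoded by correlation. A minimal sketch, using the small (13, 4, 1) planar difference set {0, 1, 3, 9} as an illustrative stand-in for the larger Singer set used in the paper:

```python
import numpy as np

# A (v, k, lambda) = (13, 4, 1) cyclic difference set: every nonzero residue mod 13
# occurs exactly once as a difference of two set elements.
v, k, lam = 13, 4, 1
D = {0, 1, 3, 9}

# Binary mask: 1 = open aperture element (open fraction k/v).
mask = np.array([1 if i in D else 0 for i in range(v)])

# Periodic autocorrelation: k at zero shift, lambda at every other shift.
autocorr = np.array([np.dot(mask, np.roll(mask, s)) for s in range(v)])
print(autocorr)  # -> [4 1 1 1 1 1 1 1 1 1 1 1 1]
```

The single peak over a flat sidelobe floor is exactly what lets a matched decoding array recover a point source without the artifacts a random mask would produce.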

  16. A scalable multi-chip architecture to realise large-format microshutter arrays for coded aperture applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; King, David O.; Smith, Gilbert W.; Stone, Steven M.; Brown, Alan G.; Gordon, Neil T.; Slinger, Christopher W.; Cannon, Kevin; Riches, Stephen; Rogers, Stanley

    2009-08-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lensless imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene, requiring a reconfigurable mask. Previously we reported on the realization of a 2 x 2 cm single-chip mask in the mid-IR based on polysilicon micro-opto-electro-mechanical systems (MOEMS) technology and on its integration with ASIC drive electronics using conventional wire bonding. The MOEMS architecture employs interference effects to modulate incident light, achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage, and uses a hysteretic row/column scheme for addressing. In this paper we present the latest transmission results in the mid-IR band (3-5 μm) and report on progress in developing a scalable architecture based on a tiled approach using multiple 2 x 2 cm MOEMS chips with associated control ASICs integrated using flip-chip technology. Initial work has focused on a 2 x 2 tiled array as a stepping stone towards an 8 x 8 array.

  17. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new 3D charged-particle program, is available and in use for the design of complex 3D electron guns and charged-particle devices. The code reads files directly from most CAD and solid-modeling programs, includes an intuitive Graphical User Interface (GUI), and provides a robust, fully automatic mesh generator. Complex problems can be set up and analysis initiated in minutes. The program includes a user-friendly post-processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constants) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  18. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I.; /Princeton, Inst. Advanced Study

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods, and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results, and they demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two, and three dimensions and in Cartesian, cylindrical, and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.
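The third-order TVD Runge-Kutta scheme mentioned here (the Shu-Osher method, written as convex combinations of forward-Euler steps) is compact enough to sketch. The test equation below is an arbitrary choice for illustration, not a problem from the paper:

```python
import numpy as np

def tvd_rk3_step(u, dt, L):
    """One step of the third-order TVD Runge-Kutta scheme of Shu & Osher
    for du/dt = L(u), built from convex combinations of forward-Euler steps."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Illustrative test: du/dt = -u, exact solution exp(-t).
u, dt = 1.0, 0.01
for _ in range(100):
    u = tvd_rk3_step(u, dt, lambda x: -x)
print(abs(u - np.exp(-1.0)))  # error on the order of 1e-8, consistent with third order
```

The convex-combination form is what gives the scheme its TVD (strong-stability-preserving) property whenever the underlying Euler step is TVD.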

  19. Simulating ion beam extraction from a single aperture triode acceleration column: A comparison of the beam transport codes IGUN and PBGUNS with test stand data

    SciTech Connect

    Patel, A.; Wills, J. S. C.; Diamond, W. T.

    2008-04-15

    Ion beam extraction from two different ion sources with single-aperture triode extraction columns was simulated with the particle beam transport codes PBGUNS and IGUN. For each ion source, the simulation results are compared to experimental data generated on well-equipped test stands. Both codes reproduced the qualitative behavior of the extracted ion beams in response to the incremental and scaled changes in extraction electrode geometry observed on the test stands. Numerical values of optimum beam currents and beam emittance generated by the simulations also agree well with the test stand data.

  20. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly gaining popularity because it shifts complexity from the encoder to the decoder with, at least in theory, no degradation in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. Therefore, the ability to obtain a good correlation estimate is becoming increasingly important in practical DVC implementations. Generally, existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF estimation methods usually outperform pre-estimation techniques at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), where correlation estimation is performed OTF, carried out jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that our proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and other cases without correlation tracking, and achieves comparable decoding performance at significantly lower complexity compared with sampling methods.
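The WZ/SI correlation in DVC is commonly modeled as Laplacian noise on the residual. A minimal sketch of on-the-fly refinement of the Laplacian scale parameter, using a running mean of absolute residuals; this illustrates the general idea of updating the correlation estimate as decoding proceeds, not the paper's EP-based method:

```python
import numpy as np

class OnTheFlyCorrelation:
    """Running estimate of the Laplacian scale b (ML estimate: b = mean|residual|),
    updated as decoded WZ pixels become available, mimicking OTF refinement."""
    def __init__(self):
        self.abs_sum, self.n = 0.0, 0

    def update(self, wz_pixels, si_pixels):
        r = np.asarray(wz_pixels, float) - np.asarray(si_pixels, float)
        self.abs_sum += np.abs(r).sum()
        self.n += r.size
        return self.abs_sum / self.n  # current scale estimate b

# Illustrative check: recover the scale of synthetic Laplacian correlation noise.
rng = np.random.default_rng(1)
si = rng.integers(0, 256, 10_000).astype(float)
wz = si + rng.laplace(0.0, 4.0, si.size)        # true scale b = 4
est = OnTheFlyCorrelation()
for i in range(0, si.size, 1000):               # refine in chunks, as decoding progresses
    b = est.update(wz[i:i+1000], si[i:i+1000])
print(round(b, 2))  # close to 4
```

In a real decoder the refined scale would feed back into the soft-input (LLR) computation of the channel decoder after each iteration.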

  1. Adaptive lifting scheme with sparse criteria for image coding

    NASA Astrophysics Data System (ADS)

    Kaaniche, Mounir; Pesquet-Popescu, Béatrice; Benazza-Benyahia, Amel; Pesquet, Jean-Christophe

    2012-12-01

    Lifting schemes (LS) were found to be efficient tools for image coding purposes. Since LS-based decompositions depend on the choice of the prediction/update operators, many research efforts have been devoted to the design of adaptive structures. The most commonly used approaches optimize the prediction filters by minimizing the variance of the detail coefficients. In this article, we investigate techniques for optimizing sparsity criteria by focusing on the use of an ℓ1 criterion instead of an ℓ2 one. Since the output of a prediction filter may be used as an input for the other prediction filters, we then propose to optimize such a filter by minimizing a weighted ℓ1 criterion related to the global rate-distortion performance. More specifically, it will be shown that the optimization of the diagonal prediction filter depends on the optimization of the other prediction filters, and vice versa. Related to this fact, we propose to jointly optimize the prediction filters by using an algorithm that alternates between the optimization of the filters and the computation of the weights. Experimental results show the benefits which can be drawn from the proposed optimization of the lifting operators.
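A one-level 1D lifting decomposition with the prediction weight chosen to minimize the ℓ1 norm of the detail coefficients can serve as a toy illustration. A grid search stands in for the paper's optimization algorithm, and the simple symmetric predictor and update are deliberate simplifications of the 2D filters the paper actually optimizes:

```python
import numpy as np

def lifting_analysis(x, a):
    """Split/predict/update lifting step: even samples predict odd ones with weight a."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - a * (even + np.roll(even, -1)) / 2.0            # predict step
    approx = even + 0.5 * a * (detail + np.roll(detail, 1)) / 2.0  # update step
    return approx, detail

def lifting_synthesis(approx, detail, a):
    """Exact inverse: undo the update, then the prediction, then interleave."""
    even = approx - 0.5 * a * (detail + np.roll(detail, 1)) / 2.0
    odd = detail + a * (even + np.roll(even, -1)) / 2.0
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

def best_l1_weight(x, grid=np.linspace(0.0, 2.0, 81)):
    """Pick the prediction weight minimizing the l1 norm of the detail signal."""
    return min(grid, key=lambda a: np.abs(lifting_analysis(x, a)[1]).sum())

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=64))          # smooth-ish test signal
a = best_l1_weight(x)
approx, detail = lifting_analysis(x, a)
print(np.allclose(lifting_synthesis(approx, detail, a), x))  # -> True
```

Whatever weight the ℓ1 criterion selects, perfect reconstruction holds by construction, which is the structural advantage of lifting over generic filter banks.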

  2. Adaptive phase-coded reconstruction for cardiac CT

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su

    2000-04-01

    Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by cardiac motion. Most of the algorithms make use of projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike previously proposed schemes where the projection sector sizes are identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.

  3. KAPAO: A Natural Guide Star Adaptive Optics System for Small Aperture Telescopes

    NASA Astrophysics Data System (ADS)

    Severson, Scott A.; Choi, P. I.; Spjut, E.; Contreras, D. S.; Gilbreth, B. N.; McGonigle, L. P.; Morrison, W. A.; Rudy, A. R.; Xue, A.; Baranec, C.; Riddle, R.

    2012-05-01

    We describe KAPAO, our project to develop and deploy a low-cost, remote-access, natural guide star adaptive optics system for the Pomona College Table Mountain Observatory (TMO) 1-meter telescope. The system will offer simultaneous dual-band, diffraction-limited imaging at visible and near-infrared wavelengths and will deliver an order-of-magnitude improvement in point source sensitivity and angular resolution relative to the current TMO seeing limits. We have adopted off-the-shelf core hardware components to ensure reliability, minimize costs, and encourage replication efforts. These components include a MEMS deformable mirror, a Shack-Hartmann wavefront sensor, and a piezo-electric tip-tilt mirror. We present: project motivation, goals, and milestones; the instrument optical design; the instrument opto-mechanical design and tolerances; and an overview of KAPAO Alpha, our on-the-sky testbed using off-the-shelf optics. Beyond the expanded scientific capabilities enabled by AO-enhanced resolution and sensitivity, the interdisciplinary nature of the instrument development effort provides an exceptional opportunity to train a broad range of undergraduate STEM students in AO technologies and techniques. The breadth of our collaboration, which includes both public (Sonoma State University) and private (Pomona and Harvey Mudd Colleges) undergraduate institutions, has enabled us to engage students from physics, astronomy, engineering, and computer science in all stages of this project. This material is based upon work supported by the National Science Foundation under Grant No. 0960343.

  4. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes in the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
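The Gallager-van Voorhis result is that Golomb codes are optimal for geometric sources; the Rice subcodes correspond to the special case where the Golomb parameter is a power of two, m = 2^k, which makes the codewords trivial to form with shifts and masks. A minimal encoder/decoder sketch for that power-of-two case:

```python
def rice_encode(n, k):
    """Golomb-Rice codeword of non-negative integer n with parameter k (m = 2**k):
    unary quotient (q ones plus a terminating zero) followed by k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def rice_decode(bits, k):
    q = bits.index("0")                           # length of the unary part
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r

# Round-trip check for a few values and parameters.
for k in range(4):
    for n in range(20):
        assert rice_decode(rice_encode(n, k), k) == n
print(rice_encode(9, 2))  # -> '11001' (quotient 2 -> '110', remainder 1 -> '01')
```

Choosing k to match the geometric parameter of the source is exactly where the adaptivity of Rice-style compressors comes from.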

  5. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code

    NASA Astrophysics Data System (ADS)

    Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.; Ippolito, N.

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in the optimization of negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around a single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the negative ion optimization 1 (NIO1) source. Results have shown that the dimensions of the flat and chamfered parts, and the slope of the latter in front of the source region, maximize the product of the production rate and extraction probability (allowing the best EG field penetration) of surface-produced negative ions. The negative ion density in the yz plane is reported.

  7. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-01

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM, and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which yields a better-performing scheme at the same SNR values. A matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB-LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB-LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK. PMID:27505775

  8. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  9. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    SciTech Connect

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated against present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction period but fall outside the standard parameters of present-day piping codes. Several approaches are available to the analyst in evaluating these non-standard components against modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components, with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach, available in Section III of the ASME Boiler and Pressure Vessel Code and the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, which need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the non-standard component finite element model has been matched to the loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under those loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite

  10. Wavelet based ECG compression with adaptive thresholding and efficient coding.

    PubMed

    Alshamali, A

    2010-01-01

    This paper proposes a new wavelet-based ECG compression technique. It is based on optimized thresholds to determine significant wavelet coefficients and an efficient coding of their positions. Huffman encoding is used to enhance the compression ratio. The proposed technique is tested using several records taken from the MIT-BIH arrhythmia database. Simulation results show that the proposed technique outperforms previously published schemes. PMID:20608811
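The threshold-then-code pipeline can be sketched with a single-level Haar transform. The paper's optimized thresholds and Huffman stage are replaced here by a fixed threshold and a simple count of coefficients an entropy coder would have to store, purely for illustration:

```python
import numpy as np

def haar_forward(x):
    """One level of the orthonormal Haar transform (length of x must be even)."""
    x = np.asarray(x, float)
    s = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return s, d

def haar_inverse(s, d):
    x = np.empty(2 * s.size)
    x[0::2] = (s + d) / np.sqrt(2.0)
    x[1::2] = (s - d) / np.sqrt(2.0)
    return x

def compress(x, thr):
    s, d = haar_forward(x)
    d_thr = np.where(np.abs(d) < thr, 0.0, d)   # discard insignificant details
    kept = np.count_nonzero(d_thr) + s.size      # coefficients left for the entropy coder
    return haar_inverse(s, d_thr), kept

# Illustrative "signal": a slow wave plus small noise; thresholding zeroes most details.
t = np.linspace(0, 1, 256)
x = np.sin(2 * np.pi * 3 * t) + 0.01 * np.random.default_rng(0).normal(size=256)
y, kept = compress(x, thr=0.05)
print(kept, float(np.max(np.abs(y - x))))
```

Each zeroed detail of magnitude below `thr` perturbs the reconstruction by at most thr/sqrt(2) per sample, which is what makes threshold selection the natural control knob for the rate/distortion trade-off.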

  11. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects, but only when the task directly taps the use of the face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition. PMID:21986295

  12. Studies of the chromatic properties and dynamic aperture of the BNL colliding-beam accelerator. [PATRICIA particle tracking code

    SciTech Connect

    Dell, G.F.

    1983-01-01

    The PATRICIA particle tracking program has been used to study chromatic effects in the Brookhaven CBA (Colliding Beam Accelerator). The short-term behavior of particles in the CBA has been followed for particle histories of 300 turns. Contributions from magnet multipoles characteristic of superconducting magnets and from closed orbit errors have been included in determining the dynamic aperture of the CBA for on- and off-momentum particles. The width of the third-integer stopband produced by the temperature dependence of magnetization-induced sextupoles in the CBA cable dipoles is evaluated for helium distribution systems having periodicities of one and six. The stopband width at a tune of 68/3 is naturally zero for the system with a periodicity of six and is approximately 10^-4 for the system with a periodicity of one. Results from theory are compared with results obtained with PATRICIA; the results agree within a factor of slightly more than two.

  13. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  15. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark E-mail: cmcnally@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
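The moving-least-squares interpolation at the heart of this scheme fits a weighted local polynomial to neighboring particle values and reads field values and derivatives off the fit. A 1D sketch (the paper uses third-order fits in 3D; the Gaussian neighbor weight below is an illustrative assumption):

```python
import numpy as np

def mls_fit(x0, xs, fs, order=3, h=1.0):
    """Weighted least-squares polynomial fit of neighbor values fs at positions xs,
    centered on x0. Returns coefficients c[k] of (x - x0)**k, so c[0] is the field
    value at x0 and c[1] its first derivative."""
    dx = np.asarray(xs, float) - x0
    A = np.vander(dx, order + 1, increasing=True)   # columns: dx**0 .. dx**order
    w = np.exp(-(dx / h) ** 2)                      # Gaussian neighbor weighting
    c, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A,
                            np.sqrt(w) * np.asarray(fs, float), rcond=None)
    return c

# A third-order fit reproduces a quadratic field and its derivative exactly.
rng = np.random.default_rng(2)
xs = rng.uniform(-1, 1, 12)                  # scattered "particle" positions
f = lambda x: 2.0 + 3.0 * x - 1.5 * x**2     # quadratic test field
c = mls_fit(0.3, xs, f(xs))
print(round(c[0], 6), round(c[1], 6))  # -> 2.765 2.1 (f(0.3) and f'(0.3))
```

Because derivatives come from the same local fit as the field values, no Eulerian mesh stencil is needed, which is what frees the particles to move with the fluid.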

  16. Correctable noise of quantum-error-correcting codes under adaptive concatenation

    NASA Astrophysics Data System (ADS)

    Fern, Jesse

    2008-01-01

    We examine the transformation of noise under a quantum-error-correcting code (QECC) concatenated repeatedly with itself, by analyzing the effects of a quantum channel after each level of concatenation using recovery operators that are optimally adapted to use error syndrome information from the previous levels of the code. We use the Shannon entropy of these channels to estimate the thresholds of correctable noise for QECCs and find considerable improvements under this adaptive concatenation. Similar methods could be used to increase quantum-fault-tolerant thresholds.

  17. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized band-limited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm performs well in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using Daubechies' D4 wavelet without the band-limited adaptive wavelet transform.

  18. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
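The Basic Compressor's block-adaptive idea, selecting one code per block of 21 samples, can be illustrated by choosing for each block the Rice parameter k that minimizes the coded length. The menu of k values and the cost formula below assume standard Golomb-Rice coding; the actual system's three concatenated codes and line-to-line mode logic are simplified away:

```python
import numpy as np

def rice_block_bits(block, k):
    """Total bits to Rice-code a block with parameter k: per sample, a unary
    quotient (n >> k ones plus a terminating bit) and k remainder bits."""
    return sum((int(n) >> k) + 1 + k for n in block)

def adaptive_block_code(samples, block_size=21, k_choices=(0, 1, 2, 3)):
    """For each block, pick the k giving the fewest bits (block-adaptive selection)."""
    total, ks = 0, []
    for i in range(0, len(samples), block_size):
        block = samples[i:i + block_size]
        k = min(k_choices, key=lambda kk: rice_block_bits(block, kk))
        ks.append(k)
        total += rice_block_bits(block, k)
    return total, ks

# Source statistics change mid-stream; the chosen k adapts block by block.
rng = np.random.default_rng(3)
quiet = rng.geometric(0.8, 210) - 1    # small prediction residuals -> small k
busy = rng.geometric(0.2, 210) - 1     # large prediction residuals -> larger k
total, ks = adaptive_block_code(np.concatenate([quiet, busy]))
print(ks[0], ks[-1])   # small k for quiet blocks, larger k once the source gets busy
```

The per-block choice is what lets the compressor track rapid changes in source statistics without any stored code tables.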

  19. Temporal Aperture Modulation

    NASA Technical Reports Server (NTRS)

    Proctor, R. J.

    1981-01-01

The two types of modulation techniques useful for X-ray imaging are reviewed. The use of optimum coded temporal aperture modulation is shown, in certain cases, to offer an advantage over a spatial aperture modulator. Example applications to a diffuse anisotropic X-ray background experiment and a wide-field-of-view hard X-ray imager are discussed.

  20. Code division controlled-MAC in wireless sensor network by adaptive binary signature design

    NASA Astrophysics Data System (ADS)

    Wei, Lili; Batalama, Stella N.; Pados, Dimitris A.; Suter, Bruce

    2007-04-01

We consider the problem of signature waveform design for code division medium-access-control (MAC) of wireless sensor networks (WSN). In contrast to conventional randomly chosen orthogonal codes, an adaptive signature design strategy is developed under the maximum pre-detection SINR (signal to interference plus noise ratio) criterion. The proposed algorithm utilizes slowest descent cords of the optimization surface to move toward the optimum solution and exhibits, upon eigenvector decomposition, linear computational complexity with respect to signature length. Numerical and simulation studies demonstrate the performance of the proposed method and offer comparisons with conventional signature code sets.
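The slowest-descent design itself is not given in the abstract, but the eigen-decomposition step it builds on can be sketched: for unit-energy signatures, maximizing pre-detection SINR amounts to minimizing the interference-plus-noise energy s^T R s, whose continuous optimum is the minimum-eigenvalue eigenvector of R; a binary signature can then be crudely obtained by taking signs. The covariance below is a hypothetical example, not the paper's WSN channel model.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 16  # signature length in chips

# Hypothetical interference-plus-noise covariance: three binary
# interfering signatures plus white noise.
S_int = rng.choice([-1.0, 1.0], size=(L, 3))
R = S_int @ S_int.T + 0.1 * np.eye(L)

# Continuous relaxation: the unit-energy signature minimizing s^T R s
# (hence maximizing pre-detection SINR) is the minimum-eigenvalue
# eigenvector of R -- the eigenvector-decomposition step the abstract
# mentions.  A binary signature is obtained here by simple sign-taking.
w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
s_cont = V[:, 0]
s_bin = np.sign(s_cont)
s_bin[s_bin == 0] = 1.0

def sinr(s, R):
    """Pre-detection SINR proxy: matched-filter signal energy over
    interference-plus-noise energy."""
    return (s @ s) ** 2 / (s @ R @ s)

# For comparison: a conventionally (randomly) chosen binary signature.
rand_sig = rng.choice([-1.0, 1.0], size=L)
gain = sinr(s_bin, R) / sinr(rand_sig, R)
```

Sign-quantizing the continuous optimum is a much blunter instrument than the slowest-descent search in the paper, but it shows where the eigen-decomposition enters.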

  1. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) is presented. The idea for the improvement is to use a more accurate mechanism for estimation of symbol probabilities in the standard CABAC algorithm. The authors' proposal of such a mechanism is based on the context-tree weighting technique. In the framework of a high-efficiency video coding (HEVC) video encoder, the improved CABAC allows 0.7% to 4.5% bitrate saving compared to the original CABAC algorithm. The application of the proposed algorithm marginally affects the complexity of HEVC video encoder, but the complexity of video decoder increases by 32% to 38%. In order to decrease the complexity of video decoding, a new tool has been proposed for the improved CABAC that enables scaling of the decoder complexity. Experiments show that this tool gives 5% to 7.5% reduction of the decoding time while still maintaining high efficiency in the data compression.
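The context-tree weighting technique referenced above builds on the Krichevsky-Trofimov (KT) estimator for per-context bit probabilities. A minimal sketch of that estimator (not the authors' full CABAC modification) shows how symbol probabilities adapt with observed counts:

```python
import math

class KTEstimator:
    """Krichevsky-Trofimov estimator: the per-context probability model
    that context-tree weighting is built from."""
    def __init__(self):
        self.counts = [0, 0]   # observed zeros and ones

    def p(self, bit):
        """KT predictive probability of the next bit (add-1/2 rule)."""
        return (self.counts[bit] + 0.5) / (sum(self.counts) + 1.0)

    def update(self, bit):
        self.counts[bit] += 1

est = KTEstimator()
bits = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
code_len = 0.0
for b in bits:
    code_len += -math.log2(est.p(b))   # ideal arithmetic-code length
    est.update(b)
```

The accumulated ideal code length stays within the KT redundancy bound of the empirical entropy; CABAC replaces the idealized arithmetic coder with a finite-precision one and maintains one such model per context.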

  2. Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons.

    PubMed

    Ralston, Bridget N; Flagg, Lucas Q; Faggin, Eric; Birmingham, John T

    2016-06-01

    For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the "history"). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates "history" into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
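A toy version of a rate-history code can be sketched with a rate model whose output depends on both the instantaneous stimulus and an adaptation variable driven by recent firing. The parameters below are illustrative, not fitted to the IAHP models or GPR2 data in the paper.

```python
import numpy as np

def simulate(stim, dt=0.01, tau_a=1.0, g=0.5):
    """Toy adapting rate model: rate = f(stimulus, history), where the
    history variable a tracks recent firing with time constant tau_a."""
    a = 0.0
    rates = []
    for s in stim:
        r = max(s - g * a, 0.0)      # instantaneous rate depends on history
        a += dt * (r - a) / tau_a    # adaptation variable follows the rate
        rates.append(r)
    return np.array(rates)

# Constant stimulus: the rate adapts from its onset value toward a lower
# steady state s / (1 + g), so stimulus cannot be read from rate alone.
rates = simulate(np.full(500, 10.0))
```

Because the same stimulus maps to different rates depending on `a`, decoding requires the (rate, history) pair rather than the rate alone, which is the essence of the rate-history code.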

  3. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  4. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485

  5. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
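A minimal forward-gain-adaptive VQ sketch, with an illustrative two-dimensional codebook and an RMS gain estimator (the paper's optimized gain estimators and gain-normalized codebook design are not reproduced):

```python
import numpy as np

# Illustrative gain-normalized codebook (8 codevectors in 2-D).
codebook = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0],
                     [0.3, 0.3], [0.3, -0.3], [-0.3, 0.3], [-0.3, -0.3]])

def encode(x):
    """Forward adaptation: estimate gain, normalize, then VQ-encode."""
    gain = np.linalg.norm(x) / np.sqrt(len(x)) + 1e-12  # RMS gain estimate
    xn = x / gain                                        # reduced dynamic range
    idx = np.argmin(np.sum((codebook - xn) ** 2, axis=1))
    return idx, gain

def decode(idx, gain):
    """Receiver multiplies the decoded codevector by the estimated gain."""
    return gain * codebook[idx]

x = np.array([5.0, 4.6])
idx, gain = encode(x)
xhat = decode(idx, gain)
```

In the forward-adaptive form sketched here the gain is transmitted as side information; the backward-adaptive variants in the paper instead estimate it from previously decoded data.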

  6. Object-adaptive depth compensated inter prediction for depth video coding in 3D video system

    NASA Astrophysics Data System (ADS)

    Kang, Min-Koo; Lee, Jaejoon; Lim, Ilsoon; Ho, Yo-Sung

    2011-01-01

Nowadays, the 3D video system using the MVD (multi-view video plus depth) data format is being actively studied. The system has many advantages with respect to virtual view synthesis, such as an auto-stereoscopic functionality, but compression of the huge input data remains a problem. Therefore, efficient 3D data compression is extremely important in the system, and the problems of low temporal consistency and viewpoint correlation should be resolved for efficient depth video coding. In this paper, we propose an object-adaptive depth-compensated inter prediction method to resolve these problems, in which the object-adaptive mean-depth difference between the current block to be coded and a reference block is compensated during inter prediction. In addition, unique properties of depth video are exploited to reduce the side information required to signal the decoder to conduct the same process. To evaluate the coding performance, we have implemented the proposed method in the MVC (multiview video coding) reference software, JMVC 8.2. Experimental results demonstrate that our proposed method is especially efficient for depth videos estimated by DERS (depth estimation reference software) discussed in the MPEG 3DV coding group. The coding gain was up to 11.69% bit saving, and it increased further when evaluated on synthesized views of virtual viewpoints.
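The core compensation step can be sketched as follows: the residual is formed against a reference block shifted by the mean-depth difference, so only the shape difference remains to be coded (the paper's side-information reduction tricks are omitted).

```python
import numpy as np

def depth_compensated_residual(cur, ref):
    """Mean-depth-compensated inter prediction (sketch): shift the
    reference block by the mean-depth difference ddc, then form the
    residual, which is zero-mean by construction."""
    ddc = cur.mean() - ref.mean()   # mean-depth difference to signal
    pred = ref + ddc                # depth-compensated predictor
    return cur - pred, ddc

# Toy 2x2 depth blocks: same object shape, different mean depth.
cur = np.array([[100, 101], [102, 103]], float)
ref = np.array([[90, 92], [91, 93]], float)
res, ddc = depth_compensated_residual(cur, ref)
```

Without the compensation the residual would carry the full depth offset (here 10) in every sample; with it, only the small shape mismatch survives.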

  7. Performance of Adaptive Trellis Coded Modulation Applied to MC-CDMA with Bi-orthogonal Keying

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Yamasaki, Shoichiro; Haseyama, Miki

A Generalized Symbol-rate-increased (GSRI) Pragmatic Adaptive Trellis Coded Modulation (ATCM) scheme applied to a Multi-carrier CDMA (MC-CDMA) system with bi-orthogonal keying is analyzed. In the MC-CDMA system considered in this paper, the input sequence of the bi-orthogonal modulator consists of a code selection bit sequence and a sign bit sequence. In [9], an efficient error correction code using a Reed-Solomon (RS) code for the code selection bit sequence was proposed. However, since BPSK is employed for the sign bit modulation, no error correction code is applied to it. In order to realize a high-speed wireless system, a multi-level modulation scheme (e.g., MPSK, MQAM, etc.) is desired. In this paper, we investigate the performance of the MC-CDMA with bi-orthogonal keying employing GSRI ATCM. GSRI TC-MPSK can set the bandwidth expansion ratio arbitrarily while keeping a higher coding gain than the conventional pragmatic TCM scheme. By changing the modulation scheme and the bandwidth expansion ratio (coding rate), this scheme can optimize the performance according to the channel conditions. Performance evaluations by simulations on an AWGN channel and multi-path fading channels are presented. It is shown that the proposed scheme achieves better throughput performance than the conventional scheme.

  8. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  9. A neural mechanism for time-window separation resolves ambiguity of adaptive coding.

    PubMed

    Hildebrandt, K Jannis; Ronacher, Bernhard; Hennig, R Matthias; Benda, Jan

    2015-03-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task--namely, the reliable encoding of the pattern of an acoustic signal--but detrimental for another--the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  10. A 2x2 multi-chip reconfigurable MOEMS mask: a stepping stone to large format microshutter arrays for coded aperture applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Brown, Alan G.; King, David O.; Smith, Gilbert W.; Gordon, Neil T.; Riches, Stephen; Rogers, Stanley

    2010-08-01

Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. Previously reported work focused on realising a 2x2 cm single-chip mask in the mid-IR based on polysilicon micro-opto-electro-mechanical systems (MOEMS) technology and its integration with ASIC drive electronics using conventional wire bonding. It employs interference effects to modulate incident light - achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage - and uses a hysteretic row/column addressing scheme. In this paper we report the latest mid-IR results for the single-chip reconfigurable MOEMS mask and trials in scaling up to a mask based on a 2x2 multi-chip array, and report on progress towards realising a large format mask comprising 44 MOEMS chips. We also explore the potential of such large, transmissive IR spatial light modulator arrays for other applications and in the current and alternative architectures.

  11. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry-adaptive procedure is also incorporated.

  12. The development and application of the self-adaptive grid code, SAGE

    NASA Astrophysics Data System (ADS)

    Davies, Carol B.

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  13. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  14. Asynchrony adaptation reveals neural population code for audio-visual timing

    PubMed Central

    Roach, Neil W.; Heron, James; Whitaker, David; McGraw, Paul V.

    2011-01-01

    The relative timing of auditory and visual stimuli is a critical cue for determining whether sensory signals relate to a common source and for making inferences about causality. However, the way in which the brain represents temporal relationships remains poorly understood. Recent studies indicate that our perception of multisensory timing is flexible—adaptation to a regular inter-modal delay alters the point at which subsequent stimuli are judged to be simultaneous. Here, we measure the effect of audio-visual asynchrony adaptation on the perception of a wide range of sub-second temporal relationships. We find distinctive patterns of induced biases that are inconsistent with the previous explanations based on changes in perceptual latency. Instead, our results can be well accounted for by a neural population coding model in which: (i) relative audio-visual timing is represented by the distributed activity across a relatively small number of neurons tuned to different delays; (ii) the algorithm for reading out this population code is efficient, but subject to biases owing to under-sampling; and (iii) the effect of adaptation is to modify neuronal response gain. These results suggest that multisensory timing information is represented by a dedicated population code and that shifts in perceived simultaneity following asynchrony adaptation arise from analogous neural processes to well-known perceptual after-effects. PMID:20961905
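The three model ingredients above can be sketched with a toy population of delay-tuned neurons and a centroid readout: reducing the response gain of neurons tuned near an adapted delay shifts the read-out timing, analogous to the described after-effects. Tuning widths, the number of neurons, and the gain factor are illustrative, not the paper's fitted values.

```python
import numpy as np

prefs = np.linspace(-300.0, 300.0, 13)   # preferred audio-visual delays (ms)
sigma = 120.0                            # illustrative tuning width (ms)

def responses(delay, gains):
    """Gaussian-tuned population response to a physical delay."""
    return gains * np.exp(-0.5 * ((delay - prefs) / sigma) ** 2)

def readout(delay, gains):
    """Centroid decoder over the population activity."""
    r = responses(delay, gains)
    return np.sum(prefs * r) / np.sum(r)

gains = np.ones_like(prefs)
# Adaptation at +100 ms: reduce gain of neurons tuned near that delay.
adapted = np.where(np.abs(prefs - 100.0) < 150.0, 0.6, 1.0)

before = readout(0.0, gains)     # physically simultaneous, unadapted
after = readout(0.0, adapted)    # same stimulus after adaptation
```

With uniform gains a simultaneous stimulus reads out as zero delay; after gain reduction on one side the centroid is repelled from the adapted delay, producing a shift in perceived timing without any change in perceptual latency.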

  15. Rate-adaptive modulation and coding for optical fiber transmission systems

    NASA Astrophysics Data System (ADS)

    Gho, Gwang-Hyun; Kahn, Joseph M.

    2011-01-01

Rate-adaptive optical transmission techniques adjust information bit rate based on transmission distance and other factors affecting signal quality. These techniques enable increased bit rates over shorter links, while enabling transmission over longer links when regeneration is not available. They are likely to become more important with increasing network traffic and a continuing evolution toward optically switched mesh networks, which make signal quality more variable. We propose a rate-adaptive scheme using variable-rate forward error correction (FEC) codes and variable constellations with a fixed symbol rate, quantifying how achievable bit rates vary with distance. The scheme uses serially concatenated Reed-Solomon codes and an inner repetition code to vary the code rate, combined with single-carrier polarization-multiplexed M-ary quadrature amplitude modulation (PM-M-QAM) with variable M and digital coherent detection. A rate adaptation algorithm uses the signal-to-noise ratio (SNR) or the FEC decoder input bit-error ratio (BER) estimated by a receiver to determine the FEC code rate and constellation size that maximizes the information bit rate while satisfying a target FEC decoder output BER and an SNR margin, yielding a peak rate of 200 Gbit/s in a nominal 50-GHz channel bandwidth. We simulate single-channel transmission through a long-haul fiber system incorporating numerous optical switches, evaluating the impact of fiber nonlinearity and bandwidth narrowing. With zero SNR margin, we achieve bit rates of 200/100/50 Gbit/s over distances of 650/2000/3000 km. Compared to an ideal coding scheme, the proposed scheme exhibits a performance gap ranging from about 6.4 dB at 650 km to 7.5 dB at 5000 km.
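The rate adaptation decision reduces to a feasibility search: pick the constellation/code-rate pair with the highest information bit rate whose required SNR fits under the estimated SNR minus the margin. The mode table below uses placeholder bits-per-symbol values and SNR thresholds, not the paper's measured operating points.

```python
SYMBOL_RATE = 28e9   # symbols/s, fixed as in the abstract (value assumed)

# (bits/symbol across both polarizations, FEC code rate, required SNR in dB)
# -- illustrative placeholder thresholds, not measured values.
MODES = [
    (4, 0.8, 7.0),    # PM-QPSK
    (6, 0.8, 11.0),   # PM-8QAM
    (8, 0.8, 14.0),   # PM-16QAM
    (4, 0.5, 4.0),    # PM-QPSK with a stronger (lower-rate) code
]

def pick_mode(snr_db, margin_db=2.0):
    """Return (info bit rate, mode) maximizing throughput subject to the
    required-SNR constraint with margin; (0.0, None) if nothing fits."""
    feasible = [(b * r * SYMBOL_RATE, (b, r, req))
                for b, r, req in MODES if req <= snr_db - margin_db]
    if not feasible:
        return 0.0, None
    return max(feasible)

rate, mode = pick_mode(snr_db=13.5)
```

In the paper the decision is driven by receiver-estimated SNR or pre-decoder BER; the lookup structure is the same either way.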

  16. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  17. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons that move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months using an object-oriented FORTRAN approach.

  18. Adapting a Navier-Stokes code to the ICL-DAP

    NASA Technical Reports Server (NTRS)

    Grosch, C. E.

    1985-01-01

The results of an experiment to adapt a Navier-Stokes code, originally developed on a serial computer, to concurrent processing on the ICL Distributed Array Processor (DAP) are reported. The algorithm used in solving the Navier-Stokes equations is briefly described. The architecture of the DAP and DAP FORTRAN are also described. The modifications of the algorithm to fit the DAP are given and discussed. Finally, performance results are given and conclusions are drawn.

  19. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.
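The three-substep operator splitting can be illustrated on a 1-D advection-diffusion model problem: an explicit upwind substep stands in for the shock-capturing hydrodynamic solve, and a backward-Euler substep stands in for the stiff implicit radiation-diffusion solve (grid, coefficients, and periodic boundaries are illustrative choices, not CRASH's).

```python
import numpy as np

def step(u, dt, dx, v=1.0, D=0.5):
    """One operator-split update: explicit hyperbolic substep, then
    implicit parabolic substep, mirroring the split in the abstract."""
    # Substep 1: explicit upwind advection (v > 0), periodic domain.
    u = u - v * dt / dx * (u - np.roll(u, 1))
    # Substep 2: implicit (backward-Euler) diffusion, which stays
    # stable however stiff the diffusion term is: solve (I - dt*D*L) u_new = u.
    n = len(u)
    r = D * dt / dx ** 2
    A = ((1 + 2 * r) * np.eye(n)
         - r * np.eye(n, k=1) - r * np.eye(n, k=-1)
         - r * np.eye(n, k=n - 1) - r * np.eye(n, k=-(n - 1)))  # periodic wrap
    return np.linalg.solve(A, u)

x = np.linspace(0.0, 1.0, 64, endpoint=False)
u = np.exp(-100 * (x - 0.5) ** 2)    # initial pulse
for _ in range(20):
    u = step(u, dt=0.005, dx=1 / 64)
```

The split update conserves the total "mass" of the pulse while the peak spreads and decays, which is the behavior the implicit substep must preserve even at diffusion numbers far above the explicit stability limit.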

  20. CRASH: A Block-adaptive-mesh Code for Radiative Shock Hydrodynamics—Implementation and Verification

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Tóth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.; Fryxell, B.; Drake, R. P.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  1. CRASH: A Block-Adaptive-Mesh Code for Radiative Shock Hydrodynamics

    NASA Astrophysics Data System (ADS)

    van der Holst, B.; Toth, G.; Sokolov, I. V.; Powell, K. G.; Holloway, J. P.; Myra, E. S.; Stout, Q.; Adams, M. L.; Morel, J. E.; Drake, R. P.

    2011-01-01

We describe the CRASH (Center for Radiative Shock Hydrodynamics) code, a block adaptive mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with the gray or multigroup method and uses a flux limited diffusion approximation to recover the free-streaming limit. The electrons and ions are allowed to have different temperatures and we include a flux limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite volume discretization in either one, two, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator split method is used to solve these equations in three substeps: (1) an explicit solve of the hydrodynamic equations with shock-capturing schemes, (2) a linear advection of the radiation in frequency-logarithm space, and (3) an implicit solve of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with this new radiation transfer and heat conduction library and equation-of-state and multigroup opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework (SWMF).

  2. An Adaptive Source-Channel Coding with Feedback for Progressive Transmission of Medical Images

    PubMed Central

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design. PMID:19190770

  4. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
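
    The h-adaptivity described above can be sketched in one dimension: elements whose error indicator exceeds a threshold are subdivided, halving the characteristic element size. A toy sketch only; ALEGRA itself refines unstructured three-dimensional finite-element meshes:

```python
def refine(elements, error, threshold):
    """One h-adaptivity pass on a 1-D mesh: split any element whose error
    indicator exceeds the threshold into two children of half the size."""
    out = []
    for (x0, x1) in elements:
        if error((x0, x1)) > threshold:
            mid = 0.5 * (x0 + x1)
            out += [(x0, mid), (mid, x1)]
        else:
            out.append((x0, x1))
    return out

mesh = [(0.0, 0.5), (0.5, 1.0)]
# toy error indicator: proportional to element size near x = 0
err = lambda e: (e[1] - e[0]) if e[0] < 0.25 else 0.0
mesh = refine(mesh, err, threshold=0.3)
```

    Repeated passes concentrate small elements where the indicator is large, reducing numerical error without resolving the whole domain finely.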

  5. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed within which these coders can be studied. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are derived. An obtainable distortion-rate function is developed for a particular scalar-quantizer mixing method that can be used to code transform coefficients at any rate.
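
    The threshold-driven block selection can be sketched as a quadtree split: blocks whose local coding difficulty exceeds a distortion threshold are subdivided into smaller transform blocks. Using pixel variance as the difficulty measure is an assumption of this sketch, not the dissertation's exact criterion:

```python
def variance(block):
    """Pixel variance of a rectangular block (list of row lists)."""
    n = len(block) * len(block[0])
    mean = sum(map(sum, block)) / n
    return sum((v - mean) ** 2 for row in block for v in row) / n

def select_blocks(img, x, y, size, threshold, min_size=2):
    """Quadtree block-size selection: split any block whose variance
    (a stand-in for local coding difficulty) exceeds the threshold."""
    block = [row[x:x + size] for row in img[y:y + size]]
    if size > min_size and variance(block) > threshold:
        h = size // 2
        return (select_blocks(img, x, y, h, threshold) +
                select_blocks(img, x + h, y, h, threshold) +
                select_blocks(img, x, y + h, h, threshold) +
                select_blocks(img, x + h, y + h, h, threshold))
    return [(x, y, size)]   # (top-left x, top-left y, block size)
```

    A flat region stays a single large block, while a block straddling an edge is recursively split down to the minimum size.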

  6. Less can be more: RNA-adapters may enhance coding capacity of replicators.

    PubMed

    de Boer, Folkert K; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to

  7. The PLUTO Code for Adaptive Mesh Computations in Astrophysical Fluid Dynamics

    NASA Astrophysics Data System (ADS)

    Mignone, A.; Zanni, C.; Tzeferacos, P.; van Straalen, B.; Colella, P.; Bodo, G.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  10. Adaptive inter color residual prediction for efficient red-green-blue intra coding

    NASA Astrophysics Data System (ADS)

    Jeong, Jinwoo; Choe, Yoonsik; Kim, Yong-Goo

    2011-07-01

    Intra coding of an RGB video is important to many high fidelity multimedia applications because video acquisition is mostly done in RGB space, and the coding of decorrelated color video loses its virtue in high quality ranges. In order to improve the compression performance of an RGB video, this paper proposes an inter color prediction using adaptive weights. For making full use of spatial, as well as inter color correlation of an RGB video, the proposed scheme is based on a residual prediction approach, and thus the incorporated prediction is performed on the transformed frequency components of spatially predicted residual data of each color plane. With the aid of efficient prediction employing frequency domain inter color residual correlation, the proposed scheme achieves up to 24.3% of bitrate reduction, compared to the common mode of H.264/AVC high 4:4:4 intra profile.
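
    The core idea, predicting one color plane's spatially predicted residual from another's with an adaptive weight, can be sketched with a least-squares weight. This is an illustrative stand-in; the paper computes its adaptive weights on transformed frequency components:

```python
def adaptive_weight(ref, target):
    """Least-squares weight w minimizing sum((target - w*ref)^2):
    an illustrative adaptive inter-color prediction weight."""
    num = sum(r * t for r, t in zip(ref, target))
    den = sum(r * r for r in ref) or 1
    return num / den

g_res = [4, -2, 6, 0]   # spatially predicted residual, G plane
b_res = [2, -1, 3, 0]   # residual of the B plane, correlated with G
w = adaptive_weight(g_res, b_res)
# the second-stage residual is what actually gets transformed and coded
second_res = [b - w * g for b, g in zip(b_res, g_res)]
```

    When the planes are strongly correlated, as in this toy example, the second-stage residual is near zero and costs far fewer bits than the raw plane residual.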

  11. AstroBEAR: Adaptive Mesh Refinement Code for Ideal Hydrodynamics & Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2011-04-01

    AstroBEAR is a modular hydrodynamic & magnetohydrodynamic code environment designed for a variety of astrophysical applications. It uses the BEARCLAW package, a multidimensional, Eulerian computational code used to solve hyperbolic systems of equations. AstroBEAR allows adaptive-mesh-refinment (AMR) simulations in 2, 2.5 (i.e., cylindrical), and 3 dimensions, in either cartesian or curvilinear coordinates. Parallel applications are supported through the MPI architecture. AstroBEAR is written in Fortran 90/95 using standard libraries. AstroBEAR supports hydrodynamic (HD) and magnetohydrodynamic (MHD) applications using a variety of spatial and temporal methods. MHD simulations are kept divergence-free via the constrained transport (CT) methods of Balsara & Spicer. Three different equation of state environments are available: ideal gas, gas with differing isentropic γ, and the analytic Thomas-Fermi formulation of A.R. Bell [2]. Current work is being done to develop a more advanced real gas equation of state.

  12. Pilot-Assisted Adaptive Channel Estimation for Coded MC-CDMA with ICI Cancellation

    NASA Astrophysics Data System (ADS)

    Yui, Tatsunori; Tomeba, Hiromichi; Adachi, Fumiyuki

    One of the promising wireless access techniques for the next generation mobile communications systems is multi-carrier code division multiple access (MC-CDMA). MC-CDMA can provide good transmission performance owing to the frequency diversity effect in a severe frequency-selective fading channel. However, the bit error rate (BER) performance of coded MC-CDMA is inferior to that of orthogonal frequency division multiplexing (OFDM) due to the residual inter-code interference (ICI) after frequency-domain equalization (FDE). Recently, we proposed a frequency-domain soft interference cancellation (FDSIC) to reduce the residual ICI and confirmed by computer simulation that the MC-CDMA with FDSIC provides better BER performance than OFDM. However, ideal channel estimation was assumed. In this paper, we propose adaptive decision-feedback channel estimation (ADFCE) and evaluate by computer simulation the average BER and throughput performances of turbo-coded MC-CDMA with FDSIC. We show that even if a practical channel estimation is used, MC-CDMA with FDSIC can still provide better performance than OFDM.
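
    The decision-feedback part of ADFCE can be sketched for a single-tap channel: detected symbols are fed back to refine the channel estimate with an LMS-style update. The step size and the noiseless demo channel are assumptions of this sketch, not the paper's pilot-assisted estimator:

```python
def adfce_update(h_est, y, d, mu=0.2):
    """One decision-feedback update of a single-tap channel estimate:
    the fed-back decision d and received sample y = h*d + noise refine
    h_est by an LMS-style step (illustrative only)."""
    err = y - h_est * d
    return h_est + mu * err * d.conjugate()

h_true = 0.8 + 0.3j                 # unknown channel tap (demo value)
h = 0j                              # initial estimate
for d in [1, -1, 1, 1, -1, 1, -1, -1, 1, 1] * 5:   # BPSK-like decisions
    y = h_true * d                  # noiseless received sample for the demo
    h = adfce_update(h, y, complex(d))
```

    With unit-energy decisions the estimation error shrinks geometrically by (1 - mu) per symbol, so the estimate tracks the channel as long as decisions are mostly correct.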

  13. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L.

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamics waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  14. Effects of Selective Adaptation on Coding Sugar and Salt Tastes in Mixtures

    PubMed Central

    Goyert, Holly F.; Formaker, Bradley K.; Hettinger, Thomas P.

    2012-01-01

    Little is known about coding of taste mixtures in complex dynamic stimulus environments. A protocol developed for odor stimuli was used to test whether rapid selective adaptation extracted sugar and salt component tastes from mixtures as it did component odors. Seventeen human subjects identified taste components of “salt + sugar” mixtures. In 4 sessions, 16 adapt–test stimulus pairs were presented as atomized, 150-μL “taste puffs” to the tongue tip to simulate odor sniffs. Stimuli were NaCl, sucrose, “NaCl + sucrose,” and water. In unadapted mixtures of two concentrations of NaCl (0.1 or 0.05 M) and sucrose at three times those concentrations (0.3 or 0.15 M), the sugar was identified 98% of the time but the suppressed salt only 65% of the time. Rapid selective adaptation decreased identification of sugar and salt preadapted ambient components to 35%, well below the 74% self-adapted level, despite variation in stimulus concentration and adapting time (<5 or >10 s). The 96% identification of sugar and salt extra mixture components was as certain as identification of single compounds. The results revealed that salt–sugar mixture suppression, dependent on relative mixture-component concentration, was mutual. Furthermore, like odors, stronger and recent tastes are emphasized in dynamic experimental conditions replicating natural situations. PMID:22562765

  15. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex

    PubMed Central

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-01-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70–200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys’ behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Altogether, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  16. Spatially adaptive bases in wavelet-based coding of semi-regular meshes

    NASA Astrophysics Data System (ADS)

    Denis, Leon; Florea, Ruxandra; Munteanu, Adrian; Schelkens, Peter

    2010-05-01

    In this paper we present a wavelet-based coding approach for semi-regular meshes, which spatially adapts the employed wavelet basis in the wavelet transformation of the mesh. The spatially-adaptive nature of the transform requires additional information to be stored in the bit-stream in order to allow the reconstruction of the transformed mesh at the decoder side. In order to limit this overhead, the mesh is first segmented into regions of approximately equal size. For each spatial region, a predictor is selected in a rate-distortion optimal manner by using a Lagrangian rate-distortion optimization technique. When compared against the classical wavelet transform employing the butterfly subdivision filter, experiments reveal that the proposed spatially-adaptive wavelet transform significantly decreases the energy of the wavelet coefficients for all subbands. Preliminary results show also that employing the proposed transform for the lowest-resolution subband systematically yields improved compression performance at low-to-medium bit-rates. For the Venus and Rabbit test models the compression improvements add up to 1.47 dB and 0.95 dB, respectively.
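
    The per-region predictor choice via Lagrangian rate-distortion optimization amounts to minimizing J = D + λR over the candidate predictors. A sketch with hypothetical predictor names and costs (the paper's actual candidates are wavelet prediction filters):

```python
def select_predictor(candidates, lam):
    """Rate-distortion optimal choice for one region: minimize
    J = D + lambda * R over candidate predictors.
    candidates maps predictor name -> (distortion, rate_bits)."""
    return min(candidates,
               key=lambda p: candidates[p][0] + lam * candidates[p][1])

# hypothetical costs for one spatial region of the mesh
region = {"butterfly": (10.0, 4), "average": (14.0, 2), "copy": (25.0, 1)}
```

    At low λ (rate is cheap) the lowest-distortion predictor wins; at high λ the optimizer trades distortion for bits, which is exactly how the per-region selection adapts to the target bit-rate.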

  18. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, covering a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate. PMID:27607718
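
    Rate adaptation by shortening works by fixing s information bits to zero and not transmitting them, so an (n, k) mother code yields rate (k - s)/(n - s). A sketch of the dimensioning arithmetic; the mother-code parameters below are hypothetical, not the paper's codes:

```python
def shortened_rate(n, k, s):
    """Code rate of an (n, k) LDPC code after shortening s information
    bits (the s bits are fixed to zero and not transmitted)."""
    return (k - s) / (n - s)

def shortening_for_rate(n, k, target):
    """Smallest shortening that brings the rate at or below the target:
    an illustrative rate-adaption rule, not the paper's construction."""
    for s in range(k):
        if shortened_rate(n, k, s) <= target:
            return s
    raise ValueError("target rate not reachable by shortening")
```

    Note the quoted overheads map to rates via R = 1/(1 + OH): 25% overhead is rate 0.8 and 42.9% overhead is rate 0.7, so a single rate-0.8 mother code can cover the whole range by shortening.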

  19. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (i.e., fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
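
    The CAT loop repeatedly administers the item carrying the most Fisher information at the current ability estimate; under the Rasch model that is the item whose difficulty is closest to θ. A sketch of the selection rule (the dichotomous Rasch model is used here for simplicity; the study used the partial credit model):

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of endorsing an item of difficulty b
    at ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def fisher_info(theta, b):
    """Item information under the Rasch model: I(theta) = p * (1 - p)."""
    p = rasch_p(theta, b)
    return p * (1.0 - p)

def next_item(theta, difficulties, asked):
    """CAT selection: administer the not-yet-asked item with maximum
    Fisher information at the current ability estimate."""
    candidates = [i for i in range(len(difficulties)) if i not in asked]
    return max(candidates, key=lambda i: fisher_info(theta, difficulties[i]))
```

    Because each administered item is maximally informative, the standard error of θ shrinks quickly and the test can stop after far fewer than 70 items.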

  20. Amino acids and our genetic code: a highly adaptive and interacting defense system.

    PubMed

    Verheesen, R H; Schweitzer, C M

    2012-04-01

    Since the discovery of the genetic code, Mendel's theory of heredity, and Darwin's theory of evolution, science has held that adaptation to the environment is a process in which genetic change is a matter of probability, and the species that survives is ultimately the one that evolved by chance. We hypothesize that evolution and the adaptation of the genes constitute a well-organized, fully adaptive system in which there is no rigidity of the genes. The division of the genes takes place in line with the environment to be expected, sensed through the mother. The encoding triplets can encode for more than one amino acid depending on the availability of the amino acids and the needed micronutrients. Those nutrients can cause disease but also prevent diseases, even cancer and autoimmunity. In fact, we hypothesize that autoimmunity is an effective process by which the organism clears suboptimal proteins formed due to amino acid and micronutrient deficiencies. Only when deficiencies persist will disease develop; otherwise the autoantibodies function as all antibodies do, in a protective way. Furthermore, we hypothesize that essential amino acids are less important than nonessential amino acids (NEA). Species developed the ability to produce the nonessential amino acids themselves because food did not supply them sufficiently. In contrast, essential amino acids are widely available, without any evolutionary pressure. Since we can produce only small amounts of NEA, and their availability in food can be reasoned to be too low, they remain our main concern in amino acid availability. In conclusion, we hypothesize that improving health will only be possible by improving our natural environment and living circumstances, not by changing the genes, since they are our last line of defense in surviving environmental change. PMID:22289341

  1. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.
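
    A binary Fresnel zone plate, the building block the hybrid DFZP design starts from, is transparent wherever the squared radius falls in an even Fresnel zone. A sketch of the zone test; the midwave-IR wavelength and focal length below are arbitrary illustrative values, not DAZLE parameters:

```python
def fzp_open(x, y, wavelength, focal_length):
    """Binary Fresnel zone plate: a point (x, y) is transparent when it
    lies in an even Fresnel zone, m = floor(r^2 / (lambda * f)).
    Zone boundaries sit at r_m = sqrt(m * lambda * f)."""
    m = int((x * x + y * y) / (wavelength * focal_length))
    return m % 2 == 0
```

    Evaluating this predicate over the LCSLM pixel grid yields the set of high-transmittance apertures; the adaptive part of DAZLE lies in re-selecting such aperture sets on the modulator.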

  2. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network units (ONUs). The ACS can provide a dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method is a promising candidate for future optical metro-access networks.
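
    Dimensioning the spreading gain can be sketched as choosing the smallest spreading factor whose processing gain, 10·log10(SF), covers the extra loss of a given RN (split ratio plus fiber length). This selection rule and the factor set are illustrative assumptions, not the paper's algorithm:

```python
import math

def choose_spreading(required_margin_db, factors=(1, 2, 4, 8, 16)):
    """Pick the smallest spreading factor whose processing gain,
    10*log10(SF), covers a remote node's extra link loss in dB.
    Illustrative power-budget dimensioning only."""
    for sf in factors:
        if 10 * math.log10(sf) >= required_margin_db:
            return sf
    return factors[-1]   # fall back to the largest available factor
```

    A nearby RN with little loss keeps SF = 1 (full throughput), while a distant, heavily split RN trades throughput for the spreading gain it needs.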

  3. The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices.

    PubMed

    Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R

    1987-07-01

    A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363
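
    The five-position generic code lends itself to a small decoder. The letter tables below follow the meanings summarized above, with position IV reduced to O/R for rate modulation (the full 1987 code defined additional programmability letters in that position), so treat this as an illustrative sketch:

```python
# Simplified NASPE/BPEG (NBG) letter tables; position IV reduced to O/R.
NBG_POSITIONS = [
    ("chamber paced",       {"O": "none", "A": "atrium", "V": "ventricle",
                             "D": "dual", "S": "single"}),
    ("chamber sensed",      {"O": "none", "A": "atrium", "V": "ventricle",
                             "D": "dual", "S": "single"}),
    ("response to sensing", {"O": "none", "T": "triggered",
                             "I": "inhibited", "D": "dual"}),
    ("rate modulation",     {"O": "none", "R": "rate modulation"}),
    ("antitachyarrhythmia", {"O": "none", "P": "pacing", "S": "shock",
                             "D": "dual"}),
]

def parse_nbg(code):
    """Decode an NBG pacemaker code string such as 'DDDR'. Unspecified
    trailing positions default to 'O' (none)."""
    code = code.upper().ljust(5, "O")
    return {name: table[code[i]]
            for i, (name, table) in enumerate(NBG_POSITIONS)}
```

    For example, `parse_nbg("VVI")` describes a ventricular-paced, ventricular-sensed, inhibited device with no rate modulation or antitachyarrhythmia function.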

  4. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotation forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  5. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
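The classical threshold layer underpinning such schemes rests on Lagrange interpolation: any t of n evaluations of a degree t-1 polynomial recover its constant term. A minimal Shamir-style sketch over a prime field follows; the paper's quantum (OAM, m-bonacci) and Huffman-Fibonacci coding components are not modeled, and the field modulus is an illustrative choice.

```python
# Shamir-style (t, n) threshold sharing via Lagrange interpolation over a
# prime field. This sketches only the classical interpolation layer of the
# hybrid scheme described above.
import random

P = 2**61 - 1  # a Mersenne prime used as the field modulus (illustrative)

def make_shares(secret, t, n):
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return total

shares = make_shares(123456789, t=3, n=5)  # any 3 of the 5 shares suffice
```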

  6. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary number of classical participants, no smaller than the threshold value, with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works under dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  7. EMMA: an adaptive mesh refinement cosmological simulation code with radiative transfer

    NASA Astrophysics Data System (ADS)

    Aubert, Dominique; Deparis, Nicolas; Ocvirk, Pierre

    2015-11-01

    EMMA is a cosmological simulation code aimed at investigating the reionization epoch. It handles simultaneously collisionless and gas dynamics, as well as radiative transfer physics using a moment-based description with the M1 approximation. Field quantities are stored and computed on an adaptive three-dimensional mesh and the spatial resolution can be dynamically modified based on physically motivated criteria. Physical processes can be coupled at all spatial and temporal scales. We also introduce a new and optional approximation to handle radiation: the light is transported at the resolution of the non-refined grid and only once the dynamics has been fully updated, whereas thermo-chemical processes are still tracked on the refined elements. Such an approximation reduces the overheads induced by the treatment of radiation physics. A suite of standard tests are presented and passed by EMMA, providing a validation for its future use in studies of the reionization epoch. The code is parallel and is able to use graphics processing units (GPUs) to accelerate hydrodynamics and radiative transfer calculations. Depending on the optimizations and the compilers used to generate the CPU reference, global GPU acceleration factors between ×3.9 and ×16.9 can be obtained. Vectorization and transfer operations currently prevent better GPU performance and we expect that future optimizations and hardware evolution will lead to greater accelerations.

  8. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.

  9. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  10. Low Complex Forward Adaptive Loss Compression Algorithm and Its Application in Speech Coding

    NASA Astrophysics Data System (ADS)

    Nikolić, Jelena; Perić, Zoran; Antić, Dragan; Jovanović, Aleksandra; Denić, Dragan

    2011-01-01

    This paper proposes a low-complexity forward adaptive lossy compression algorithm that works on a frame-by-frame basis. In particular, the proposed algorithm performs frame-by-frame analysis of the input speech signal, and estimates and quantizes the gain within each frame in order to enable quantization by the forward adaptive piecewise linear optimal compandor. In comparison with the solution designed according to the G.711 standard, our algorithm not only provides a higher average signal-to-quantization-noise ratio, but also reduces the PCM bit rate by about 1 bit/sample. Moreover, the algorithm fully satisfies the G.712 standard, since its performance exceeds the curve defined by G.712 over the whole variance range. Accordingly, we can reasonably expect that our algorithm will find practical application in the high-quality coding of signals represented with fewer than 8 bits/sample which, like speech signals, follow a Laplacian distribution and have time-varying variances.
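The forward-adaptive mechanism itself is simple to sketch: per frame, estimate a gain, transmit it as side information, and quantize the gain-normalized samples. In the sketch below a plain uniform quantizer stands in for the paper's piecewise linear optimal compandor, the gain is the frame peak rather than an estimated statistic, and gain quantization is omitted for brevity; all of these are simplifying assumptions.

```python
# Minimal forward-adaptive frame coder: the gain is derived from the frame
# itself (hence "forward") and sent alongside the quantized samples. A
# uniform quantizer replaces the paper's optimal compandor for brevity.
import math

def quantize_frame(frame, bits=7):
    gain = max(max(abs(s) for s in frame), 1e-12)   # frame gain = peak magnitude
    levels, step = 2 ** bits, 2.0 / (2 ** bits)     # normalized range [-1, 1]
    codes = [min(levels - 1, int((s / gain + 1.0) / step)) for s in frame]
    return gain, codes                              # gain is the side information

def dequantize_frame(gain, codes, bits=7):
    step = 2.0 / (2 ** bits)
    return [((c + 0.5) * step - 1.0) * gain for c in codes]

frame = [0.4 * math.sin(2 * math.pi * 120 * n / 8000) for n in range(80)]
gain, codes = quantize_frame(frame)
rec = dequantize_frame(gain, codes)
err = max(abs(a - b) for a, b in zip(frame, rec))   # bounded by gain * step / 2
```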

  11. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.

  12. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes.
In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals

  13. Aperture masking behind AO systems

    NASA Astrophysics Data System (ADS)

    Ireland, Michael J.

    2012-07-01

    Sparse Aperture-Mask Interferometry (SAM or NRM) behind Adaptive Optics (AO) has now come of age, with more than a dozen astronomy papers published from several 5-10 m class telescopes around the world. I will describe the reasons behind its success in achieving relatively high contrasts (~1000:1 at λ/D) and repeatable binary astronomy at the diffraction limit, even when used behind laser-guide-star adaptive optics. Placed within the context of AO calibration, the information in an image can be split into pupil-plane phase, Fourier amplitude and closure phase. It is the closure-phase observable, or its generalisation to Kernel phase, that is immune to pupil-plane phase errors at first and second order and has been the reason for the technique's success. I will outline the limitations of the technique and the prospects for aperture masking and related techniques in the future.
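The closure-phase immunity mentioned above is an exact algebraic cancellation: each baseline's measured phase is corrupted by the difference of its two aperture piston errors, and summing around a triangle of apertures eliminates those errors entirely. A toy numeric check (all values arbitrary illustrations):

```python
# Why closure phase is immune to per-aperture (pupil-plane) phase errors:
# around a triangle of apertures (1, 2, 3) the error terms appear as
# (e1 - e2) + (e2 - e3) + (e3 - e1) = 0 and drop out exactly.
import random

true_phase = {(1, 2): 0.3, (2, 3): -0.7, (3, 1): 0.4}  # intrinsic object phases
err = {k: random.uniform(-3, 3) for k in (1, 2, 3)}     # unknown aperture errors

def measured(i, j):
    # A baseline's measured phase is corrupted by its two aperture errors.
    return true_phase[(i, j)] + err[i] - err[j]

closure = measured(1, 2) + measured(2, 3) + measured(3, 1)
# The closure phase equals the sum of the intrinsic phases, errors cancelled.
```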

  14. A Peak Power Reduction Method with Adaptive Inversion of Clustered Parity-Carriers in BCH-Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Muta, Osamu; Akaiwa, Yoshihiko

    In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of a codeword in BCH-coded OFDM systems. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying weighting factors (WFs) so as to minimize the PAPR of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the properties of BCH decoding. When a primitive BCH code with single error correction, such as the (31,26) code, is used, the proposed method estimates the WFs by means of a significant-bit protection method which assigns a significant bit to the best subcarrier selected among all possible subcarriers. In computer simulations with (31,26), (31,21) and (32,21) BCH codes, the PAPR of the OFDM signal at a CCDF (complementary cumulative distribution function) of 10^-4 is reduced by about 1.9, 2.5 and 2.5 dB, respectively, by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
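The selection step, evaluating the PAPR with and without the parity block inverted and keeping the better candidate, is easy to sketch. The BCH encoder and the receiver-side WF estimation are not modeled here; the bit patterns and sizes are arbitrary stand-ins.

```python
# Toy illustration of parity-block inversion for PAPR reduction: try both
# weighting factors (+1 keeps the parity bits, -1 inverts them) and keep
# whichever yields the lower-PAPR OFDM symbol.
import cmath
import math

def papr_db(bits):
    # BPSK-map the bits onto N subcarriers and synthesize one OFDM symbol.
    N = len(bits)
    syms = [1.0 - 2.0 * b for b in bits]
    x = [sum(s * cmath.exp(2j * cmath.pi * k * n / N) for k, s in enumerate(syms)) / N
         for n in range(N)]
    power = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(power) / (sum(power) / N))

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # arbitrary message bits
parity = [0, 1, 1, 0, 1, 0, 0, 1]                          # arbitrary parity bits
cand = {w: papr_db(data + [b ^ (w < 0) for b in parity]) for w in (+1, -1)}
best_w = min(cand, key=cand.get)   # transmit the lower-PAPR variant
```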

  15. Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

    PubMed Central

    Latinus, Marianne; Belin, Pascal

    2011-01-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384

  16. Adaptation of the Advanced Spray Combustion Code to Cavitating Flow Problems

    NASA Technical Reports Server (NTRS)

    Liang, Pak-Yan

    1993-01-01

    A very important consideration in turbopump design is the prediction and prevention of cavitation. Thus far conventional CFD codes have not been generally applicable to the treatment of cavitating flows. Taking advantage of its two-phase capability, the Advanced Spray Combustion Code is being modified to handle flows with transient as well as steady-state cavitation bubbles. The volume-of-fluid approach incorporated into the code is extended and augmented with a liquid phase energy equation and a simple evaporation model. The strategy adopted also successfully deals with the cavity closure issue. Simple test cases will be presented and remaining technical challenges will be discussed.

  17. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  18. Bipartite geminivirus host adaptation determined cooperatively by coding and noncoding sequences of the genome.

    PubMed

    Petty, I T; Carter, S C; Morra, M R; Jeffrey, J L; Olivey, H E

    2000-11-25

    Bipartite geminiviruses are small, plant-infecting viruses with genomes composed of circular, single-stranded DNA molecules, designated A and B. Although they are closely related genetically, individual bipartite geminiviruses frequently exhibit host-specific adaptation. Two such viruses are bean golden mosaic virus (BGMV) and tomato golden mosaic virus (TGMV), which are well adapted to common bean (Phaseolus vulgaris) and Nicotiana benthamiana, respectively. In previous studies, partial host adaptation was conferred on BGMV-based or TGMV-based hybrid viruses by separately exchanging open reading frames (ORFs) on DNA A or DNA B. Here we analyzed hybrid viruses in which all of the ORFs on both DNAs were exchanged except for AL1, which encodes a protein with strictly virus-specific activity. These hybrid viruses exhibited partial transfer of host-adapted phenotypes. In contrast, exchange of noncoding regions (NCRs) upstream from the AR1 and BR1 ORFs did not confer any host-specific gain of function on hybrid viruses. However, when the exchangeable ORFs and NCRs from TGMV were combined in a single BGMV-based hybrid virus, complete transfer of TGMV-like adaptation to N. benthamiana was achieved. Interestingly, the reciprocal TGMV-based hybrid virus displayed only partial gain of function in bean. This may be, in part, the result of defective virus-specific interactions between TGMV and BGMV sequences present in the hybrid, although a potential role in adaptation to bean for additional regions of the BGMV genome cannot be ruled out. PMID:11080490

  19. Fine-Granularity Loading Schemes Using Adaptive Reed-Solomon Coding for xDSL-DMT Systems

    NASA Astrophysics Data System (ADS)

    Panigrahi, Saswat; Le-Ngoc, Tho

    2006-12-01

    While most existing loading algorithms for xDSL-DMT systems strive for the optimal energy distribution to maximize their rate, the amounts of bits loaded onto subcarriers are constrained to be integers, and the associated granularity losses can represent a significant percentage of the achievable data rate, especially in the presence of the peak-power constraint. To recover these losses, we propose a fine-granularity loading scheme using joint optimization of adaptive modulation and flexible coding parameters based on programmable Reed-Solomon (RS) codes and a bit-error probability criterion. Illustrative examples of applications to VDSL-DMT systems indicate that the proposed scheme can offer a rate increase of about [InlineEquation not available: see fulltext] in most cases as compared to various existing integer-bit-loading algorithms. This improvement is in good agreement with the theoretical estimates developed to quantify the granularity loss.

  20. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

    Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science; Technology Team

    We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  1. Debuncher Momentum Aperture Measurements

    SciTech Connect

    O'Day, S.

    1991-01-01

    During the November 1990 through January 1991 p̄ studies period, the momentum aperture of the beam in the debuncher ring was measured. The momentum aperture (Δp/p) was found to be 4.7%. The momentum spread was also measured with beam bunch rotation off. A nearly constant particle population density was observed for particles with Δp/p of less than 4.3%, indicating virtually unobstructed orbits in this region. The population of particles with momenta outside this aperture was found to decrease rapidly. An absolute or 'cut-off' momentum aperture of Δp/p = 5.50% was measured.

  2. Aperture Size Effect on Extracted Negative Ion Current Density

    NASA Astrophysics Data System (ADS)

    de Esch, H. P. L.; Svensson, L.; Riz, D.

    2009-03-01

    This paper discusses experimental results obtained at the 1 MV testbed at CEA Cadarache that appear to show a higher extracted D⁻ current density from small apertures. Plasma grids with different shapes have been installed and tested; all grids had a single aperture. The tests were done in volume operation and in caesium operation. We tested four grids: two with Ø14 mm apertures, one with Ø11 mm and one with Ø8 mm. No aperture-size effect was observed in volume operation. In caesiated operation the extracted current density for the Ø8 mm aperture appears to be significantly higher (~50%) than for the Ø14 mm aperture. Simulations with a 3D Monte Carlo Trajectory Following Code have shown an aperture-size effect of about 20%. Finally, as byproducts of the experiments, data on backstreaming positive ions and the temperature of the plasma grid have been obtained.

  3. DEMOCRITUS: An adaptive particle in cell (PIC) code for object-plasma interactions

    NASA Astrophysics Data System (ADS)

    Lapenta, Giovanni

    2011-06-01

    A new method for the simulation of plasma materials interactions is presented. The method is based on the particle in cell technique for the description of the plasma and on the immersed boundary method for the description of the interactions between materials and plasma particles. A technique to adapt the local number of particles and grid adaptation are used to reduce the truncation error and the noise of the simulations, to increase the accuracy per unit cost. In the present work, the computational method is verified against known results. Finally, the simulation method is applied to a number of specific examples of practical scientific and engineering interest.

  4. TELESCOPES: Astronomers Overcome 'Aperture Envy'.

    PubMed

    Irion, R

    2000-07-01

    Many users of small telescopes are disturbed by the trend of shutting down smaller instruments in order to help fund bigger and bolder ground-based telescopes. Small telescopes can thrive in the shadow of giant new observatories, they say--but only if they are adapted to specialized projects. Telescopes with apertures of 2 meters or less have unique abilities to monitor broad swaths of the sky and stare at the same objects night after night, sometimes for years; various teams are turning small telescopes into robots, creating networks that span the globe and devoting them to survey projects that big telescopes don't have a prayer of tackling. PMID:17832960

  5. Adaptive coding of orofacial and speech actions in motor and somatosensory spaces with and without overt motor behavior.

    PubMed

    Sato, Marc; Vilain, Coriandre; Lamalle, Laurent; Grabski, Krystyna

    2015-02-01

    Studies of speech motor control suggest that articulatory and phonemic goals are defined in multidimensional motor, somatosensory, and auditory spaces. To test whether motor simulation might rely on sensory-motor coding common with that for motor execution, we used a repetition suppression (RS) paradigm while measuring neural activity with sparse-sampling fMRI during repeated overt and covert orofacial and speech actions. RS refers to the phenomenon that repeated stimuli or motor acts lead to decreased activity in specific neural populations, and it is associated with enhanced adaptive learning related to the repeated stimulus attributes. Common suppressed neural responses were observed in motor and posterior parietal regions in the achievement of both repeated overt and covert orofacial and speech actions, including the left premotor cortex and inferior frontal gyrus, the superior parietal cortex and adjacent intraparietal sulcus, and the left IC and the SMA. Interestingly, reduced activity of the auditory cortex was observed during overt but not covert speech production, a finding likely reflecting a motor rather than an auditory imagery strategy by the participants. By providing evidence for adaptive changes in premotor and associative somatosensory brain areas, the observed RS suggests online state coding of both orofacial and speech actions in somatosensory and motor spaces with and without motor behavior and sensory feedback. PMID:25203272

  6. Variable-aperture screen

    DOEpatents

    Savage, George M.

    1991-01-01

    Apparatus for separating material into first and second portions according to size including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function.

  7. Rotating Aperture System

    DOEpatents

    Rusnak, Brian; Hall, James M.; Shen, Stewart; Wood, Richard L.

    2005-01-18

    A rotating aperture system includes a low-pressure vacuum pumping stage with apertures for passage of a deuterium beam. A stator assembly includes holes for passage of the beam. The rotor assembly includes a shaft connected to a deuterium gas cell or a crossflow venturi that has a single aperture on each side that together align with holes every rotation. The rotating apertures are synchronized with the firing of the deuterium beam such that the beam fires through a clear aperture and passes into the Xe gas beam stop. Portions of the rotor are lapped into the stator to improve the sealing surfaces, to prevent rapid escape of the deuterium gas from the gas cell.

  8. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.

  9. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

    The advancement in wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the sharing nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC in the wireless multicast service is how to assign MCSs and the time resources to each SVC layer in the heterogeneous channel condition. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862
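The combinatorial core that the ILP formalizes can be seen in a brute-force miniature: pick one MCS per SVC layer so that total airtime fits a frame budget while maximizing how many layers users can decode. Everything below (rates, user capabilities, layer sizes, budget) is a made-up illustration of the problem structure, not the paper's model.

```python
# Brute-force sketch of MCS assignment for layered (SVC) multicast:
# maximize total decodable layers across users subject to an airtime budget.
from itertools import product

mcs_rate = {0: 1.0, 1: 2.0, 2: 4.0}   # relative bits per slot for each MCS
user_best = [0, 1, 2, 2]              # highest MCS index each user can decode
layer_bits = [4.0, 2.0]               # base layer, enhancement layer (bits)
BUDGET = 6.0                          # time slots available per frame

def utility(assign):
    slots = sum(b / mcs_rate[m] for b, m in zip(layer_bits, assign))
    if slots > BUDGET:
        return -1                     # infeasible: exceeds the airtime budget
    total = 0
    for cap in user_best:
        # A user decodes layer l only if it also decodes all lower layers.
        for m in assign:
            if m > cap:
                break
            total += 1
    return total

best = max(product(sorted(mcs_rate), repeat=len(layer_bits)), key=utility)
```

A real formulation would express the same feasibility and layering constraints as ILP variables; brute force is only viable here because the toy instance has 3^2 assignments.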

  10. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

    Advances in wideband wireless networks support real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances overall system throughput compared to an existing algorithm. PMID:25276862

  11. Simplified APC for Space Shuttle applications. [Adaptive Predictive Coding for speech transmission]

    NASA Technical Reports Server (NTRS)

    Hutchins, S. E.; Batson, B. H.

    1975-01-01

    This paper describes an 8 kbps adaptive predictive digital speech transmission system which was designed for potential use in the Space Shuttle Program. The system was designed to provide good voice quality in the presence of both cabin noise on board the Shuttle and the anticipated bursty channel. Minimal increase in size, weight, and power over the current high data rate system was also a design objective.
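    The predict-quantize-resynthesize loop at the heart of adaptive predictive coding can be sketched as follows. This is a generic APC illustration (autocorrelation-method LPC plus a uniform residual quantizer), not the Shuttle system's actual 8 kbps design:

```python
def lpc_coeffs(x, order):
    """Levinson-Durbin recursion on the autocorrelation of frame x;
    returns prediction coefficients a with pred[n] = sum a[j]*x[n-1-j]."""
    n = len(x)
    r = [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(order + 1)]
    a, err = [0.0] * order, r[0]
    for i in range(order):
        acc = r[i + 1] - sum(a[j] * r[i - j] for j in range(i))
        k = acc / err
        new_a = a[:]
        new_a[i] = k
        for j in range(i):
            new_a[j] = a[j] - k * a[i - 1 - j]
        a = new_a
        err *= (1.0 - k * k)
    return a

def apc_roundtrip(x, order=2, step=0.05):
    """Predict each sample from `order` past decoded samples, quantize the
    residual with a uniform step, and resynthesize as the decoder would."""
    a = lpc_coeffs(x, order)
    decoded = []
    for i, s in enumerate(x):
        pred = sum(a[j] * (decoded[i - 1 - j] if i - 1 - j >= 0 else 0.0)
                   for j in range(order))
        q = round((s - pred) / step)     # quantized residual symbol
        decoded.append(pred + q * step)  # decoder applies same predictor
    return decoded
```

    Because the encoder quantizes in the closed loop, the per-sample reconstruction error is bounded by half the quantizer step regardless of predictor quality.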

  12. Adaptive Colour Contrast Coding in the Salamander Retina Efficiently Matches Natural Scene Statistics

    PubMed Central

    Vasserman, Genadiy; Schneidman, Elad; Segev, Ronen

    2013-01-01

    The visual system continually adjusts its sensitivity to the statistical properties of the environment through an adaptation process that starts in the retina. Colour perception and processing are commonly thought to occur mainly in higher visual areas, and indeed most evidence for chromatic colour contrast adaptation comes from cortical studies. We show that colour contrast adaptation starts in the retina, where ganglion cells adjust their responses to the spectral properties of the environment. We demonstrate that the ganglion cells match their responses to red-blue stimulus combinations according to the relative contrast of each of the input channels by rotating their functional response properties in colour space. Using measurements of the chromatic statistics of natural environments, we show that the retina balances inputs from the two (red and blue) stimulated colour channels, as would be expected from theoretically optimal behaviour. Our results suggest that colour is encoded in the retina based on the efficient processing of spectral information that matches spectral combinations in natural scenes on the colour processing level. PMID:24205373

  13. Perceiving Affordances for Fitting through Apertures

    ERIC Educational Resources Information Center

    Ishak, Shaziela; Adolph, Karen E.; Lin, Grace C.

    2008-01-01

    Affordances--possibilities for action--are constrained by the match between actors and their environments. For motor decisions to be adaptive, affordances must be detected accurately. Three experiments examined the correspondence between motor decisions and affordances as participants reached through apertures of varying size. A psychophysical…

  14. A video coding scheme based on joint spatiotemporal and adaptive prediction.

    PubMed

    Jiang, Wenfei; Latecki, Longin Jan; Liu, Wenyu; Liang, Hui; Gorman, Ken

    2009-05-01

    We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/Joint Spatiotemporal Prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently improve video quality, and in many cases to achieve better compression rates and improved computational speed. PMID:19342337
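    The "image-dependent color space transformation (KLT)" can be illustrated in the simplest two-channel case, where diagonalizing the covariance reduces to a closed-form rotation; a real coder would do the same per image over three color channels. This is a generic KLT sketch, not the paper's implementation:

```python
import math

def klt_2d(samples):
    """Closed-form KLT for 2-component samples: find the rotation that
    diagonalizes the 2x2 sample covariance and project onto it."""
    n = len(samples)
    mx = sum(s[0] for s in samples) / n
    my = sum(s[1] for s in samples) / n
    cxx = sum((s[0] - mx) ** 2 for s in samples) / n
    cyy = sum((s[1] - my) ** 2 for s in samples) / n
    cxy = sum((s[0] - mx) * (s[1] - my) for s in samples) / n
    # Rotation angle that zeroes the off-diagonal covariance term.
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)
    c, s_ = math.cos(theta), math.sin(theta)
    return [(c * (x - mx) + s_ * (y - my),
             -s_ * (x - mx) + c * (y - my)) for x, y in samples]
```

    After the transform the two output components are decorrelated, which is what makes the subsequent coding of each channel more efficient.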

  15. A 2-D orientation-adaptive prediction filter in lifting structures for image coding.

    PubMed

    Gerek, Omer N; Cetin, A Enis

    2006-01-01

    Lifting-style implementations of wavelets are widely used in image coders. A two-dimensional (2-D) edge-adaptive lifting structure, similar to the Daubechies 5/3 wavelet, is presented. The 2-D prediction filter predicts the value of the next polyphase component according to an edge-orientation estimator of the image. Consequently, the prediction domain is allowed to rotate +/-45 degrees in regions with diagonal gradient. The gradient estimator is computationally inexpensive, with an additional cost of only six subtractions per lifting instruction, and no multiplications are required. PMID:16435541
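    A minimal sketch of the orientation-adaptive predict step (my own simplification of the paper's lifting structure, not its exact filter): each odd-polyphase pixel is predicted from whichever even-neighbor pair, horizontal or +/-45 degrees, has the smallest spread, with the orientation test using subtractions only:

```python
def adaptive_predict(img):
    """Predict each odd-column pixel from even-column neighbors, choosing
    the orientation (horizontal, +45 or -45 degrees) whose neighbor pair
    has the smallest spread; store the prediction residual in place."""
    rows, cols = len(img), len(img[0])
    residual = [row[:] for row in img]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1, 2):   # odd polyphase columns
            candidates = {
                'h':  (img[r][c - 1]     + img[r][c + 1])     / 2.0,
                'd+': (img[r - 1][c - 1] + img[r + 1][c + 1]) / 2.0,
                'd-': (img[r + 1][c - 1] + img[r - 1][c + 1]) / 2.0,
            }
            # Cheap gradient test: only subtractions, no multiplies.
            spread = {
                'h':  abs(img[r][c - 1]     - img[r][c + 1]),
                'd+': abs(img[r - 1][c - 1] - img[r + 1][c + 1]),
                'd-': abs(img[r + 1][c - 1] - img[r - 1][c + 1]),
            }
            best = min(spread, key=spread.get)
            residual[r][c] = img[r][c] - candidates[best]
    return residual
```

    On a diagonal step edge the diagonal predictor is exact, so the residual vanishes where a fixed horizontal predictor would leave large values.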

  16. Bistatic synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Yates, Gillian

    Synthetic aperture radar (SAR) allows all-weather, day and night, surface surveillance and has the ability to detect, classify and geolocate objects at long stand-off ranges. Bistatic SAR, where the transmitter and the receiver are on separate platforms, is seen as a potential means of countering the vulnerability of conventional monostatic SAR to electronic countermeasures, particularly directional jamming, and avoiding physical attack of the imaging platform. As the receiving platform can be totally passive, it does not advertise its position by RF emissions. The transmitter is not susceptible to jamming and can, for example, operate at long stand-off ranges to reduce its vulnerability to physical attack. This thesis examines some of the complications involved in producing high-resolution bistatic SAR imagery. The effect of bistatic operation on resolution is examined from a theoretical viewpoint and analytical expressions for resolution are developed. These expressions are verified by simulation work using a simple 'point by point' processor. This work is extended to look at using modern practical processing engines for bistatic geometries. Adaptations of the polar format algorithm and range migration algorithm are considered. The principal achievement of this work is a fully airborne demonstration of bistatic SAR. The route taken in reaching this is given, along with some results. The bistatic SAR imagery is analysed and compared to the monostatic imagery collected at the same time. Demonstrating high-resolution bistatic SAR imagery using two airborne platforms represents what I believe to be a European first and is likely to be the first time that this has been achieved outside the US (the UK has very little insight into US work on this topic). Bistatic target characteristics are examined through the use of simulations. This also compares bistatic imagery with monostatic and gives further insight into the utility of bistatic SAR.

  17. Adaptation.

    PubMed

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, or at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells helps in: communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms varies. Adaptive characters of organisms, including adaptive behaviours, increase fitness so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationships to welfare. In complex animals, feed forward control is widely used. Individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources but are also for actions and stimuli which are part of the mechanism which has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and

  18. Sub-Aperture Interferometers

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2010-01-01

    Sub-aperture interferometers -- also called wavefront-split interferometers -- have been developed for simultaneously measuring displacements of multiple targets. The terms "sub-aperture" and "wavefront-split" signify that the original measurement light beam in an interferometer is split into multiple sub-beams derived from non-overlapping portions of the original measurement-beam aperture. Each measurement sub-beam is aimed at a retroreflector mounted on one of the targets. The splitting of the measurement beam is accomplished by use of truncated mirrors and masks, as shown in the example below.

  19. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

    Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when Hamming distance ranking is used. Besides, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single- and multiple-table search over the state-of-the-art methods. PMID:27448359
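    The bitwise part of the idea, weighting each differing bit by a query-adaptive quality score instead of counting it as a flat 1, can be sketched as follows (integer hash codes and illustrative weights assumed, not the paper's learned weighting):

```python
def weighted_hamming(query, code, weights):
    """Bitwise-weighted Hamming distance: each differing bit contributes
    its (query-adaptive) weight rather than a flat count of 1."""
    diff = query ^ code
    return sum(w for i, w in enumerate(weights) if (diff >> i) & 1)

def rank(query, codes, weights):
    """Order database hash codes by weighted distance to the query."""
    return sorted(range(len(codes)),
                  key=lambda j: weighted_hamming(query, codes[j], weights))
```

    With flat weights this reduces to plain Hamming ranking; with query-adaptive weights, two codes at equal Hamming distance can be ordered by which bits they disagree on.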

  20. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

    Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence and methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
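    An illustrative pseudoderivative-style rule (my own simplification, not the authors' exact operator): treat the change in best fitness across successive generations as a discrete derivative and adapt the mutation rate so the search explores when stalled and exploits when improving:

```python
def adapt_mutation_rate(rate, fitness_history, floor=0.001, ceil=0.5):
    """Adapt the mutation rate from the discrete derivative of best
    fitness: raise it when fitness has stalled (escape local optima),
    lower it when fitness is improving (refine the current basin)."""
    if len(fitness_history) < 2:
        return rate
    slope = fitness_history[-1] - fitness_history[-2]  # pseudoderivative
    if slope <= 0:        # no improvement: explore more aggressively
        rate *= 2.0
    else:                 # improving: exploit, mutate less
        rate *= 0.5
    return max(floor, min(ceil, rate))
```

    The floor and ceiling keep the rate in a sane range so the GA never collapses to zero mutation or degenerates into random search.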

  1. First Clinical Release of an Online, Adaptive, Aperture-Based Image-Guided Radiotherapy Strategy in Intensity-Modulated Radiotherapy to Correct for Inter- and Intrafractional Rotations of the Prostate

    SciTech Connect

    Deutschmann, Heinz; Kametriser, Gerhard; Steininger, Philipp; Scherer, Philipp; Schoeller, Helmut; Gaisberger, Christoph; Mooslechner, Michaela; Mitterlechner, Bernhard; Weichenberger, Harald; Fastner, Gert; Wurstbauer, Karl; Jeschke, Stephan; Forstner, Rosemarie; Sedlmayer, Felix

    2012-08-01

    Purpose: We developed and evaluated a correction strategy for prostate rotations using direct adaptation of segments in intensity-modulated radiotherapy (IMRT). Method and Materials: Implanted fiducials (four gold markers) were used to determine interfractional translations, rotations, and dilations of the prostate. We used hybrid imaging: The markers were automatically detected in two pretreatment planar X-ray projections; their actual position in three-dimensional space was reconstructed from these images at first. The structure set comprising prostate, seminal vesicles, and adjacent rectum wall was transformed accordingly in 6 degrees of freedom. Shapes of IMRT segments were geometrically adapted in a class solution forward-planning approach, derived within seconds on-site and treated immediately. Intrafractional movements were followed in MV electronic portal images captured on the fly. Results: In 31 of 39 patients, for 833 of 1013 fractions (supine, flat couch, knee support, comfortably full bladder, empty rectum, no intraprostatic marker migrations >2 mm of more than one marker), the online aperture adaptation allowed safe reduction of clinical target volume-planning target volume (prostate) margins down to 5 mm when only interfractional corrections were applied: Dominant L-R rotations were found to be 5.3° (mean of means), standard deviation of means ±4.9°, maximum at 30.7°. Three-dimensional vector translations relative to skin markings were 9.3 ± 4.4 mm (maximum, 23.6 mm). Intrafractional movements in 7.7 ± 1.5 min (maximum, 15.1 min) between kV imaging and last beam's electronic portal images showed further L-R rotations of 2.5° ± 2.3° (maximum, 26.9°), and three-dimensional vector translations of 3.0 ± 3.7 mm (maximum, 10.2 mm). Addressing intrafractional errors could further reduce margins to 3 mm. Conclusion: We demonstrated the clinical feasibility of an online

  2. An adaptive algorithm for removing the blocking artifacts in block-transform coded images

    NASA Astrophysics Data System (ADS)

    Yang, Jingzhong; Ma, Zheng

    2005-11-01

    JPEG and MPEG compression standards adopt a macroblock encoding approach, but this method can lead to annoying blocking effects: artificial rectangular discontinuities in the decoded images. Many powerful postprocessing algorithms have been developed to remove the blocking effects. However, all but the simplest algorithms can be too complex for real-time applications, such as video decoding. We propose an adaptive and easy-to-implement algorithm that can remove the artificial discontinuities. The algorithm has two steps: first, a fast linear smoothing of the pixels at block edges using an average-value replacement strategy; second, comparing the variance derived from the difference of the processed image against a reasonable threshold to determine whether the first step should stop. Experiments have proved that this algorithm can quickly remove the artificial discontinuities without destroying the key information of the decoded images, and that it is robust to different images and transform strategies.
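    The two-step scheme can be sketched directly from the description: an average-value smoothing pass over block-boundary pixels, then a variance test on the difference image as the stopping rule. A 1-D pixel row and 8-pixel blocks are assumed for illustration:

```python
def smooth_block_edges(row, block=8):
    """Step 1: replace the two pixels straddling each block boundary
    with their average (fast linear smoothing, no multiplies needed)."""
    out = row[:]
    for b in range(block, len(row), block):
        avg = (row[b - 1] + row[b]) / 2.0
        out[b - 1] = out[b] = avg
    return out

def should_stop(original, processed, threshold):
    """Step 2: compare the variance of the difference image against a
    threshold to decide whether the smoothing pass should terminate."""
    diff = [p - o for p, o in zip(processed, original)]
    mean = sum(diff) / len(diff)
    var = sum((d - mean) ** 2 for d in diff) / len(diff)
    return var < threshold
```

    A large difference variance means the pass changed real image content, not just a blocking artifact, which is the signal to stop.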

  3. Variable-aperture screen

    DOEpatents

    Savage, G.M.

    1991-10-29

    Apparatus is described for separating material into first and second portions according to size including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function. 10 figures.

  4. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

    We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves near shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11, the International Conference for High Performance Computing, we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we will show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of each of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We will also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and Fukushima Nuclear Power Plants, in which the finest grid distance of 20 meters is achieved through a 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  5. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding.

    PubMed

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; they therefore ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both an ocean-temperature dataset and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme. PMID:27043574
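    The adaptive measurement-formation idea can be sketched as follows. The stopping rule, keep gathering random-combination measurements until roughly c*k*log(n/k) are held for the current sparsity estimate k, is an assumed compressed-sensing rule of thumb standing in for the paper's termination analysis:

```python
import math
import random

def gather_adaptive(readings, sparsity_guess, c=4):
    """Sketch of adaptive measurement formation: the sink keeps issuing
    random-combination (network-coded) queries until it holds enough
    measurements for the current sparsity estimate."""
    n = len(readings)

    def needed(k):
        # Assumed CS rule of thumb: about c*k*log(n/k) measurements.
        return max(1, math.ceil(c * k * math.log(max(n / k, 2))))

    measurements = []
    while len(measurements) < needed(sparsity_guess):
        # Each measurement is a random 0/1 combination of node readings,
        # as a network-coded aggregate would deliver to the sink.
        coeffs = [random.choice((0, 1)) for _ in range(n)]
        measurements.append(sum(a * x for a, x in zip(coeffs, readings)))
    return measurements
```

    The point of the feedback design is that `sparsity_guess` is updated between epochs, so the measurement count tracks the actual variability of the sensed field instead of a fixed worst case.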

  6. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering, CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and that its value is known before each data gathering epoch starts; they therefore ignore the variation of the data observed by WSNs deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme, in which the sink node adaptively queries the nodes of interest to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, to minimize the number of overall transmissions in the formation of each measurement, we have developed an NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes, MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both an ocean-temperature dataset and a practical network deployment also prove the effectiveness of the proposed feedback CDG scheme. PMID:27043574

  7. The AdaptiSPECT Imaging Aperture

    PubMed Central

    Chaix, Cécile; Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    In this paper, we present the imaging aperture of an adaptive SPECT imaging system being developed at the Center for Gamma Ray Imaging (AdaptiSPECT). AdaptiSPECT is designed to automatically change its configuration in response to preliminary data, in order to improve image quality for a particular task. In a traditional pinhole SPECT imaging system, the characteristics (magnification, resolution, field of view) are set by the geometry of the system, and any modification can be accomplished only by manually changing the collimator and the distance of the detector to the center of the field of view. Optimization of the imaging system for a specific task on a specific individual is therefore difficult. In an adaptive SPECT imaging system, on the other hand, the configuration can be conveniently changed under computer control. A key component of an adaptive SPECT system is its aperture. In this paper, we present the design, specifications, and fabrication of the adaptive pinhole aperture that will be used for AdaptiSPECT, as well as the controls that enable autonomous adaptation. PMID:27019577

  8. Motion-vector-based adaptive quantization in MPEG-4 fine granular scalable coding

    NASA Astrophysics Data System (ADS)

    Yang, Shuping; Lin, Xinggang; Wang, Guijin

    2003-05-01

    The selective enhancement mechanism of Fine Granular Scalability (FGS) in MPEG-4 can enhance specific objects under bandwidth variation. We propose a novel technique for self-adaptive enhancement of regions of interest based on Motion Vectors (MVs) of the base layer, suitable for video sequences with a still background where only the moving objects in the scene are of interest, such as news broadcasts, video surveillance, and Internet education. Motion vectors generated during base-layer encoding are obtained and analyzed. A Gaussian model is introduced to describe non-moving macroblocks, which may have non-zero MVs caused by random noise or luminance variation. MVs of these macroblocks are set to zero to prevent them from being enhanced. A segmentation algorithm, region growth, based on MV values is exploited to separate foreground from background. Post-processing is needed to reduce the influence of burst noise so that only the moving regions of interest are left. Applying the result in selective enhancement during enhancement-layer encoding significantly improves the visual quality of regions of interest in such videos transmitted at different bit rates in our experiments.
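    The region-growth segmentation step, grouping macroblocks whose motion-vector magnitude survives the noise model, can be sketched as a flood fill over the MV grid. The threshold and grid values below are illustrative, not the paper's parameters:

```python
from collections import deque

def segment_moving_regions(mv_mag, thresh=1.0):
    """Group macroblocks whose motion-vector magnitude exceeds `thresh`
    into 4-connected regions by region growing (BFS flood fill).
    Returns a label grid (0 = background) and the region count."""
    rows, cols = len(mv_mag), len(mv_mag[0])
    label = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if mv_mag[r][c] > thresh and label[r][c] == 0:
                next_label += 1          # seed a new foreground region
                label[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mv_mag[ny][nx] > thresh
                                and label[ny][nx] == 0):
                            label[ny][nx] = next_label
                            queue.append((ny, nx))
    return label, next_label
```

    In the paper's pipeline this runs after the Gaussian noise model has zeroed spurious MVs, so only genuinely moving regions seed the fill and get selectively enhanced.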

  9. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

    Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections are designed. With higher and higher numbers of cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed, enabling a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative, which needs no physical interconnection layout as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, less area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple-access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
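    The CDMA mechanism the thesis builds on can be sketched with orthogonal Walsh spreading codes: concurrent transmitter-receiver pairs sum their spread signals onto the shared channel, and each receiver recovers its own bit by correlation. This is a textbook CDMA illustration, not the proposed protocol itself:

```python
def walsh_codes(n):
    """Generate n orthogonal Walsh codes (n a power of two) via the
    Hadamard recursion; chips are +1/-1."""
    h = [[1]]
    while len(h) < n:
        h = ([row + row for row in h]
             + [row + [-x for x in row] for row in h])
    return h

def transmit(bits_per_pair, codes):
    """Sum the spread signals of all transmitter-receiver pairs onto the
    shared wireless channel (each bit is +1 or -1)."""
    chips = len(codes[0])
    return [sum(b * codes[i][k] for i, b in enumerate(bits_per_pair))
            for k in range(chips)]

def despread(channel, code):
    """Correlate the channel with one pair's code; because the codes are
    orthogonal, the sign of the correlation recovers that pair's bit."""
    corr = sum(ch * c for ch, c in zip(channel, code))
    return 1 if corr > 0 else -1
```

    Orthogonality is what lets all pairs use the channel simultaneously: every other pair's contribution cancels in the correlation.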

  10. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

    We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effect of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be submicron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be at least comparable to the synthetic aperture transit time, or transmitter phase has to be recorded and a correction applied on detection.

  11. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  12. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, compared to its prior-art binary counterpart, the proposed NB-LDPC-CM scheme better addresses the needs of future OTNs, namely achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN. PMID:20174010

  13. Implementation of swept synthetic aperture imaging

    NASA Astrophysics Data System (ADS)

    Bottenus, Nick; Jakovljevic, Marko; Boctor, Emad; Trahey, Gregg E.

    2015-03-01

    Ultrasound imaging of deep targets is limited by the resolution of current ultrasound systems based on the available aperture size. We propose a system to synthesize an extended effective aperture in order to improve resolution and target detectability at depth using a precisely-tracked transducer swept across the region of interest. A Field II simulation was performed to demonstrate the swept aperture approach in both the spatial and frequency domains. The adaptively beamformed system was tested experimentally using a volumetric transducer and an ex vivo canine abdominal layer to evaluate the impact of clutter-generating tissue on the resulting point spread function. Resolution was improved by 73% using a 30.8 degree sweep despite the presence of varying aberration across the array with an amplitude on the order of 100 ns. Slight variations were observed in the magnitude and position of side lobes compared to the control case, but overall image quality was not significantly degraded, as assessed by a simulation based on the experimental point spread function. We conclude that the swept aperture imaging system may be a valuable tool for synthesizing large effective apertures using conventional ultrasound hardware.

  14. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), also known as code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.

  15. Inferring the Frequency Spectrum of Derived Variants to Quantify Adaptive Molecular Evolution in Protein-Coding Genes of Drosophila melanogaster.

    PubMed

    Keightley, Peter D; Campos, José L; Booker, Tom R; Charlesworth, Brian

    2016-06-01

    Many approaches for inferring adaptive molecular evolution analyze the unfolded site frequency spectrum (SFS), a vector of counts of sites with different numbers of copies of derived alleles in a sample of alleles from a population. Accurate inference of the high-copy-number elements of the SFS is difficult, however, because of misassignment of alleles as derived vs. ancestral. This is a known problem with parsimony using outgroup species. Here we show that the problem is particularly serious if there is variation in the substitution rate among sites brought about by variation in selective constraint levels. We present a new method for inferring the SFS using one or two outgroups that attempts to overcome the problem of misassignment. We show that two outgroups are required for accurate estimation of the SFS if there is substantial variation in selective constraints, which is expected to be the case for nonsynonymous sites in protein-coding genes. We apply the method to estimate unfolded SFSs for synonymous and nonsynonymous sites in a population of Drosophila melanogaster from phase 2 of the Drosophila Population Genomics Project. We use the unfolded spectra to estimate the frequency and strength of advantageous and deleterious mutations and estimate that ∼50% of amino acid substitutions are positively selected but that <0.5% of new amino acid mutations are beneficial, with a scaled selection strength of Nes ≈ 12. PMID:27098912
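The unfolded SFS the authors infer is, structurally, just a vector of counts indexed by derived-allele copy number. A minimal sketch of its construction (the function name and toy data are hypothetical, and a real analysis must first handle the ancestral-state misassignment the paper addresses):

```python
from collections import Counter

def unfolded_sfs(derived_counts, n):
    """Unfolded site frequency spectrum: entry i (1 <= i <= n-1) is the
    number of polymorphic sites at which the derived allele is carried
    by exactly i of the n sampled alleles."""
    tally = Counter(derived_counts)
    return [tally.get(i, 0) for i in range(1, n)]

# Toy data: derived-allele copy counts at six polymorphic sites in a
# sample of n = 4 alleles.
print(unfolded_sfs([1, 1, 2, 3, 1, 2], n=4))  # -> [3, 2, 1]
```

Misassignment inflates the high-copy-number entries of exactly this vector, which is why the paper's outgroup-based correction matters.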

  16. Apodizer aperture for lasers

    DOEpatents

    Jorna, Siebe; Siebert, Larry D.; Brueckner, Keith A.

    1976-11-09

    An aperture attenuator for use with high power lasers which includes glass windows shaped and assembled to form an annulus chamber which is filled with a dye solution. The annulus chamber is shaped such that the section in alignment with the axis of the incident beam follows a curve represented by the equation y = (r - r_o)^n.
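The quoted profile is easy to sketch numerically; the parameter values below are illustrative, not taken from the patent:

```python
import numpy as np

def dye_thickness(r, r_o=1.0, n=4):
    """Dye-cell thickness along the beam axis, y = (r - r_o)**n, for
    radii at or beyond the inner annulus radius r_o (zero inside)."""
    r = np.asarray(r, dtype=float)
    return np.where(r >= r_o, (r - r_o) ** n, 0.0)

# Thickness (and hence attenuation) grows smoothly from the inner edge
# of the annulus, apodizing the beam instead of clipping it sharply.
print(dye_thickness(np.linspace(0.0, 2.0, 5)))
```

The smooth power-law onset is the point of the design: a hard-edged aperture would diffract, while a gradually thickening dye layer rolls the beam profile off gently.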

  17. Optical sparse aperture imaging.

    PubMed

    Miller, Nicholas J; Dierking, Matthew P; Duncan, Bradley D

    2007-08-10

    The resolution of a conventional diffraction-limited imaging system is proportional to its pupil diameter. A primary goal of sparse aperture imaging is to enhance resolution while minimizing the total light collection area; the latter being desirable, in part, because of the cost of large, monolithic apertures. Performance metrics are defined and used to evaluate several sparse aperture arrays constructed from multiple, identical, circular subapertures. Subaperture piston and/or tilt effects on image quality are also considered. We selected arrays with compact nonredundant autocorrelations first described by Golay. We vary both the number of subapertures and their relative spacings to arrive at an optimized array. We report the results of an experiment in which we synthesized an image from multiple subaperture pupil fields by masking a large lens with a Golay array. For this experiment we imaged a slant edge feature of an ISO12233 resolution target in order to measure the modulation transfer function. We note the contrast reduction inherent in images formed through sparse aperture arrays and demonstrate the use of a Wiener-Helstrom filter to restore contrast in our experimental images. Finally, we describe a method to synthesize images from multiple subaperture focal plane intensity images using a phase retrieval algorithm to obtain estimates of subaperture pupil fields. Experimental results from synthesizing an image of a point object from multiple subaperture images are presented, and weaknesses of the phase retrieval method for this application are discussed. PMID:17694146
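The redundancy argument can be made concrete: the incoherent MTF of a pupil is its normalized autocorrelation, so the spatial-frequency coverage of any candidate mask can be checked directly. A rough sketch (the grid size and the three-point layout are illustrative, not a true Golay array):

```python
import numpy as np

def mtf(mask):
    """Incoherent OTF magnitude of a binary pupil mask: the normalized
    (circular) autocorrelation of the aperture, computed via FFT."""
    acf = np.fft.ifft2(np.abs(np.fft.fft2(mask)) ** 2).real
    acf = np.fft.fftshift(acf)
    return acf / acf.max()

n = 64
full = np.zeros((n, n)); full[28:36, 28:36] = 1.0   # filled pupil
sparse = np.zeros((n, n))                           # three subapertures
for y, x in [(20, 20), (20, 44), (44, 32)]:
    sparse[y - 2:y + 3, x - 2:x + 3] = 1.0

# The sparse layout reaches wider spatial frequencies per unit of
# collecting area, but with reduced mid-frequency contrast -- the
# contrast loss that the Wiener-Helstrom filter is used to restore.
```

Comparing `mtf(full)` and `mtf(sparse)` on the same grid shows the tradeoff the abstract describes: extended support, lower modulation.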

  18. Synthetic Aperture Radar Interferometry

    NASA Technical Reports Server (NTRS)

    Rosen, P. A.; Hensley, S.; Joughin, I. R.; Li, F.; Madsen, S. N.; Rodriguez, E.; Goldstein, R. M.

    1998-01-01

    Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristics of the surface. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering.

  19. Phasing rectangular apertures.

    PubMed

    Baker, K L; Homoelle, D; Utterback, E; Jones, S M

    2009-10-26

    Several techniques have been developed to phase apertures in the context of astronomical telescopes with segmented mirrors. Phasing multiple apertures, however, is important in a wide range of optical applications. The application of primary interest in this paper is the phasing of multiple short pulse laser beams for fast ignition fusion experiments. In this paper, analytic expressions are derived for parameters such as the far-field distribution, a line-integrated form of the far-field distribution that can be fit to measured data, the enclosed energy (energy-in-a-bucket), and the center of mass, which can then be used to phase two rectangular apertures. Experimental data are taken with a MEMS device to simulate the two apertures, and comparisons are made between the analytic parameters and those derived from the measurements. Two methods, fitting the measured far-field distribution to the theoretical distribution and measuring the ensquared energy in the far field, produced an overall phase variance between the 100 measurements of less than 0.005 rad^2, or an RMS displacement of less than 12 nm. PMID:19997175
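The energy-in-a-bucket metric can be illustrated with a 1-D toy model of two slit apertures carrying a relative piston phase (slit positions, widths, and grid size below are arbitrary, not the paper's geometry):

```python
import numpy as np

def far_field(phase, n=512):
    """1-D far field (FFT) of two side-by-side slit apertures with a
    relative piston phase between them."""
    pupil = np.zeros(n, dtype=complex)
    pupil[200:240] = 1.0                   # slit 1
    pupil[272:312] = np.exp(1j * phase)    # slit 2, piston offset
    return np.abs(np.fft.fftshift(np.fft.fft(pupil))) ** 2

def central_energy(phase):
    """Energy in a small central bucket; maximized when the two slits
    are in phase -- the 'energy-in-a-bucket' phasing metric."""
    intensity = far_field(phase)
    c = len(intensity) // 2
    return intensity[c - 2:c + 3].sum()
```

Because `central_energy(0.0)` exceeds `central_energy(np.pi)`, a simple search over the applied piston can drive the phase error toward zero, which is the essence of the phasing procedure.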

  20. Genetic apertures: an improved sparse aperture design framework.

    PubMed

    Salvaggio, Philip S; Schott, John R; McKeown, Donald M

    2016-04-20

    The majority of optical sparse aperture imaging research in the remote sensing field has been confined to a small set of aperture layouts. While these layouts possess some desirable properties for imaging, they may not be ideal for all applications. This work introduces an optimization framework for sparse aperture layouts based on genetic algorithms as well as a small set of fitness functions for incoherent sparse aperture image quality. The optimization results demonstrate the merits of existing designs and the opportunity for creating new sparse aperture layouts. PMID:27140086
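As a toy version of the genetic-algorithm framework, one can evolve a 1-D subaperture layout toward a non-redundant set of baselines. Everything here (fitness function, 1-D geometry, GA parameters) is a simplified stand-in for the paper's actual 2-D layouts and image-quality fitness functions:

```python
import random

def distinct_baselines(layout):
    """Fitness: the number of distinct pairwise separations (unique
    baselines) realized by a 1-D layout of subaperture positions."""
    return len({abs(a - b) for a in layout for b in layout if a != b})

def evolve(n_sub=4, span=12, pop=30, gens=60, seed=1):
    """Tiny genetic algorithm: truncation selection plus single-point
    mutation, searching for a layout with no redundant baselines."""
    rng = random.Random(seed)
    population = [sorted(rng.sample(range(span), n_sub)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=distinct_baselines, reverse=True)
        survivors = population[: pop // 2]
        children = []
        for parent in survivors:
            child = list(parent)
            child[rng.randrange(n_sub)] = rng.randrange(span)
            children.append(sorted(child) if len(set(child)) == n_sub
                            else list(parent))
        population = survivors + children
    return max(population, key=distinct_baselines)

best = evolve()
# Four subapertures give 6 unordered pairs; a perfect (Golay-ruler-like)
# layout, e.g. [0, 1, 4, 9], realizes 6 distinct separations.
print(best, distinct_baselines(best))
```

The paper's framework replaces this toy fitness with incoherent image-quality metrics, but the selection/mutation loop has the same shape.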

  1. Aperture center energy showcase

    SciTech Connect

    Torres, J. J.

    2012-03-01

    Sandia and Forest City have established a Cooperative Research and Development Agreement (CRADA), and the partnership provides a unique opportunity to take technology research and development from demonstration to application in a sustainable community. A project under that CRADA, the Aperture Center Energy Showcase, offers a means to develop exhibits and demonstrations that present feedback to community members, Sandia customers, and visitors. The technologies included in the showcase focus on renewable energy, efficiency, and resilience. These technologies are generally scalable and provide secure, efficient solutions to energy production, delivery, and usage. In addition to establishing an Energy Showcase, support offices and conference capabilities that facilitate research, collaboration, and demonstration were created. The Aperture Center project focuses on establishing a location that provides outreach, awareness, and demonstration of research findings, emerging technologies, and project developments to Sandia customers, visitors, and Mesa del Sol community members.

  2. Configurable Aperture Space Telescope

    NASA Technical Reports Server (NTRS)

    Ennico, Kimberly; Bendek, Eduardo

    2015-01-01

    In December 2014, we were awarded a Center Innovation Fund to evaluate an optical and mechanical concept for a novel implementation of a segmented telescope based on modular, interconnected small sats (satlets). The concept is called CAST, a Configurable Aperture Space Telescope. At a current TRL of 2, we aim to reach TRL 3 in September 2015 by demonstrating a 2x2 mirror system to validate our optical model and error budget, providing a straw man mechanical architecture and structural damping analyses, and deriving future satlet-based observatory performance requirements. CAST provides alternative access to a visible and/or UV wavelength space telescope with a 1-meter or larger aperture for the NASA SMD Astrophysics and Planetary Science community after the retirement of HST.

  3. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  4. Synthetic Aperture Radiometer Systems

    NASA Technical Reports Server (NTRS)

    LeVine, David M.

    1999-01-01

    Aperture synthesis is a new technology for passive microwave remote sensing from space which has the potential to overcome the limitations set in the past by antenna size. This is an interferometric technique in which pairs of small antennas and signal processing are used to obtain the resolution of a single large antenna. The technique has been demonstrated successfully at L-band with the aircraft prototype instrument, ESTAR. Proposals have been submitted to demonstrate this technology in space (HYDROSTAR and MIRAS).

  5. Integrated electrochromic aperture diaphragm

    NASA Astrophysics Data System (ADS)

    Deutschmann, T.; Oesterschulze, E.

    2014-05-01

    In the last years, the triumphal march of handheld electronics with integrated cameras has opened amazing fields for small high-performing optical systems. For this purpose miniaturized iris apertures are of practical importance because they are essential to control both the dynamic range of the imaging system and the depth of focus. Therefore, we invented a micro optical iris based on an electrochromic (EC) material. This material changes its absorption in response to an applied voltage. A coaxial arrangement of annular rings of the EC material is used to establish an iris aperture without the need for any mechanical moving parts. The advantages of this device arise not only from the space-saving design, with a device-layer thickness of only 50 μm, but also from its low power consumption. In fact, its transmission state is stable in an open circuit, a property termed the memory effect. Only changes of the absorption require a voltage of up to 2 V. In contrast to mechanical iris apertures the absorption may be controlled on an analog scale, offering the opportunity for apodization. These properties make our device the ideal candidate for battery-powered and space-saving systems. We present optical measurements concerning control of the transmitted intensity and depth of focus, and studies dealing with switching times, light scattering, and stability. While the EC polymer used in this study still has limitations concerning color and contrast, the presented device features all functions of an iris aperture. In contrast to conventional devices it offers some special features. Owing to the variable chemistry of the EC material, its spectral response may be adjusted to certain applications like color filtering in different spectral regimes (UV, optical range, infrared). Furthermore, all segments may be switched individually to establish functions like spatial Fourier filtering or laterally tunable intensity filters.

  6. Aperture excited dielectric antennas

    NASA Technical Reports Server (NTRS)

    Crosswell, W. F.; Chatterjee, J. S.; Mason, V. B.; Tai, C. T.

    1974-01-01

    The results of a comprehensive experimental and theoretical study of the effect of placing dielectric objects over the aperture of waveguide antennas are presented. Experimental measurements of the radiation patterns, gain, impedance, near-field amplitude, and pattern and impedance coupling between pairs of antennas are given for various Plexiglas shapes, including the sphere and the cube, excited by rectangular, circular, and square waveguide feed apertures. The waveguide excitation of a dielectric sphere is modeled using the Huygens' source, and expressions for the resulting electric fields, directivity, and efficiency are derived. Calculations using this model show good overall agreement with experimental patterns and directivity measurements. The waveguide under an infinite dielectric slab is used as an impedance model. Calculations using this model agree qualitatively with the measured impedance data. It is concluded that dielectric loaded antennas such as the waveguide excited sphere, cube, or sphere-cylinder can produce directivities in excess of that obtained by a uniformly illuminated aperture of the same cross section, particularly for dielectric objects with dimensions of 2 wavelengths or less. It is also shown that for certain configurations coupling between two antennas of this type is less than that for the same antennas without dielectric loading.

  7. Dynamic metamaterial aperture for microwave imaging

    NASA Astrophysics Data System (ADS)

    Sleasman, Timothy; F. Imani, Mohammadreza; Gollub, Jonah N.; Smith, David R.

    2015-11-01

    We present a dynamic metamaterial aperture for use in computational imaging schemes at microwave frequencies. The aperture consists of an array of complementary, resonant metamaterial elements patterned into the upper conductor of a microstrip line. Each metamaterial element contains two diodes connected to an external control circuit such that the resonance of the metamaterial element can be damped by application of a bias voltage. Through applying different voltages to the control circuit, select subsets of the elements can be switched on to create unique radiation patterns that illuminate the scene. Spatial information of an imaging domain can thus be encoded onto this set of radiation patterns, or measurements, which can be processed to reconstruct the targets in the scene using compressive sensing algorithms. We discuss the design and operation of a metamaterial imaging system and demonstrate reconstructed images with a 10:1 compression ratio. Dynamic metamaterial apertures can potentially be of benefit in microwave or millimeter wave systems such as those used in security screening and through-wall imaging. In addition, feature-specific or adaptive imaging can be facilitated through the use of the dynamic aperture.
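The reconstruction step can be illustrated with a toy linear model: each switched-on mask pattern contributes one row of a measurement matrix, and a sparsity prior inverts the underdetermined system. All sizes, the random measurement matrix, and the iterative-hard-thresholding solver below are illustrative stand-ins for the paper's actual radiation patterns and compressive-sensing pipeline:

```python
import numpy as np

# Toy forward model: each mask pattern yields one scalar measurement
# g = H f of the scene f; here 60 measurements for 100 scene pixels.
rng = np.random.default_rng(2)
npix, nmeas = 100, 60
scene = np.zeros(npix)
scene[[12, 47, 83]] = 1.0                  # three point targets
H = rng.standard_normal((nmeas, npix))     # stand-in radiation patterns
g = H @ scene

def iht(H, g, k, iters=200):
    """Iterative hard thresholding: a minimal sparse-recovery solver.
    Gradient step on ||g - Hf||^2, then keep the k largest entries."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    f = np.zeros(H.shape[1])
    for _ in range(iters):
        f = f + step * H.T @ (g - H @ f)
        small = np.argsort(np.abs(f))[:-k]
        f[small] = 0.0
    return f

recovered = iht(H, g, k=3)
print(np.flatnonzero(recovered))  # indices of the recovered targets
```

With fewer measurements than unknowns, the sparsity constraint is what makes the inversion well-posed, which is the working principle behind the 10:1 compression ratio reported.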

  8. Dynamic metamaterial aperture for microwave imaging

    SciTech Connect

    Sleasman, Timothy; Imani, Mohammadreza F.; Gollub, Jonah N.; Smith, David R.

    2015-11-16

    We present a dynamic metamaterial aperture for use in computational imaging schemes at microwave frequencies. The aperture consists of an array of complementary, resonant metamaterial elements patterned into the upper conductor of a microstrip line. Each metamaterial element contains two diodes connected to an external control circuit such that the resonance of the metamaterial element can be damped by application of a bias voltage. Through applying different voltages to the control circuit, select subsets of the elements can be switched on to create unique radiation patterns that illuminate the scene. Spatial information of an imaging domain can thus be encoded onto this set of radiation patterns, or measurements, which can be processed to reconstruct the targets in the scene using compressive sensing algorithms. We discuss the design and operation of a metamaterial imaging system and demonstrate reconstructed images with a 10:1 compression ratio. Dynamic metamaterial apertures can potentially be of benefit in microwave or millimeter wave systems such as those used in security screening and through-wall imaging. In addition, feature-specific or adaptive imaging can be facilitated through the use of the dynamic aperture.

  9. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.

  10. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger

    Digging a shaft or drift inside a rock mass is a common practice in civil engineering when a transportation way, such as a motorway, railway tunnel or storage shaft is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model frequently can be separated into a strongly strained “core” area to be represented by a Discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency

  11. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small-spot-size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
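The tilted-edge MTF measurement follows a standard chain: differentiate the edge-spread function (ESF) to obtain the line spread function (LSF), then Fourier transform and normalize. A sketch with a hypothetical Gaussian blur standing in for the simulated edge data:

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF estimate from a measured edge-spread function: differentiate
    to get the line spread function, then take the normalized magnitude
    of its Fourier transform."""
    lsf = np.diff(esf)
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]

# Hypothetical blurred edge: a step convolved with a Gaussian LSF.
x = np.linspace(-5.0, 5.0, 257)
lsf_true = np.exp(-x[:-1] ** 2 / (2 * 0.5 ** 2))
esf = np.cumsum(lsf_true)
esf /= esf[-1]
mtf = mtf_from_esf(esf)
# A wider LSF (e.g. a larger aperture hole) gives a faster MTF roll-off,
# which is how the abstract's resolution-vs-hole-diameter result reads.
```

In practice the edge is tilted relative to the pixel grid precisely so that the ESF can be supersampled across many detector rows before this chain is applied.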

  12. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
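The image cross-correlation step that senses sub-aperture tilts can be sketched at integer-pixel precision (real systems fit the correlation peak to subpixel accuracy; the shift convention and test images here are illustrative):

```python
import numpy as np

def shift_estimate(ref, img):
    """Integer-pixel shift between two sub-aperture images via FFT
    cross-correlation; per-subaperture shifts stand in for the local
    wavefront tilts sensed from the training images."""
    cc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(img))).real
    peak = np.unravel_index(np.argmax(cc), cc.shape)
    # Wrap peak indices in the upper half of each axis to negative lags.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, cc.shape))

rng = np.random.default_rng(1)
ref = rng.standard_normal((64, 64))
img = np.roll(ref, (3, 5), axis=(0, 1))   # img displaced by (3, 5)
print(shift_estimate(ref, img))           # this lag convention gives (-3, -5)
```

Comparing the shifts measured in the four sub-aperture regions yields the differential tilts needed to drive the correction modes the abstract describes.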

  13. Differential Optical Synthetic Aperture Radar

    DOEpatents

    Stappaerts, Eddy A.

    2005-04-12

    A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.

  14. New Aperture Partitioning Element

    NASA Astrophysics Data System (ADS)

    Griffin, S.; Calef, B.; Williams, S.

    Postprocessing in an optical system can be aided by adding an optical element to partition the pupil into a number of segments. When imaging through the atmosphere, the recorded data are blurred by temperature-induced variations in the index of refraction along the line of sight. Using speckle imaging techniques developed in the astronomy community, this blurring can be corrected to some degree. The effectiveness of these techniques is diminished by redundant baselines in the pupil. Partitioning the pupil reduces the degree of baseline redundancy, and therefore improves the quality of images that can be obtained from the system. It is possible to implement the described approach on an optical system with a segmented primary mirror, but not very practical. This is because most optical systems do not have segmented primary mirrors, and those that do have relatively low bandwidth positioning of segments due to their large mass and inertia. It is much more practical to position an active aperture partitioning element at an aft optics pupil of the optical system. This paper describes the design, implementation and testing of a new aperture partitioning element that is completely reflective and reconfigurable. The device uses four independent, annular segments that can be positioned with a high degree of accuracy without impacting the optical wavefront of each segment. This mirror has been produced and is currently deployed and working on the 3.6 m telescope.

  15. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role in the decoding process of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations result in a large number of table memory accesses, which in turn lead to high table power consumption. To address the large memory-access cost of current methods and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper is that index search technology is introduced to reduce the memory accesses required for table look-up, and thereby reduce table power consumption. Specifically, in our scheme, index search reduces memory accesses by reducing the searching and matching operations for code_word, exploiting the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed index-search-based table look-up algorithm can reduce memory-access consumption by about 60% compared with a sequential-search table look-up scheme, and thus save substantial power consumption for CAVLD in H.264/AVC.
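The idea of indexing by the zero-prefix length rather than scanning codewords can be shown with a toy variable-length code. This is illustrative only: for a unary-style code ("1", "01", "001", ...) the count of leading zeros is itself a direct index to the symbol, whereas the real CAVLD tables also involve a code suffix and context selection:

```python
def decode_indexed(bits):
    """Decode a toy unary-style VLC: the leading-zero count directly
    indexes the symbol, so no sequential scan over a codeword table
    (the baseline scheme the paper compares against) is needed."""
    symbols, i = [], 0
    while i < len(bits):
        zeros = 0
        while bits[i] == "0":   # length of the zero prefix ...
            zeros += 1
            i += 1
        i += 1                  # ... consume the terminating '1'
        symbols.append(zeros)   # prefix length indexes the symbol
    return symbols

print(decode_indexed("1010010001"))  # -> [0, 1, 2, 3]
```

Replacing a per-codeword compare loop with one computed index is precisely the kind of access-count reduction the paper quantifies for the full CAVLC tables.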

  16. Interferometric Synthetic Aperture Microscopy (ISAM)

    NASA Astrophysics Data System (ADS)

    Adie, Steven G.; Shemonski, Nathan D.; Ralston, Tyler S.; Carney, P. Scott; Boppart, Stephen A.

    The trade-off between transverse resolution and depth-of-field, and the mitigation of optical aberrations, are long-standing problems in optical imaging. The deleterious impact of these problems on three-dimensional tomography increases with numerical aperture (NA), and so they represent a significant impediment for real-time cellular resolution tomography over the typical imaging depths achieved with OCT. With optical coherence microscopy (OCM), which utilizes higher-NA optics than OCT, the depth-of-field is severely reduced, and it has been postulated that aberrations play a major role in reducing the useful imaging depth in OCM. Even at lower transverse resolution, both these phenomena produce artifacts that degrade the imaging of fine tissue structures. Early approaches to the limited depth-of-field problem in time-domain OCT utilized dynamic focusing. In spectral-domain OCT, this focus-shifting approach to data acquisition leads to long acquisition times and large datasets. Adaptive optics (AO) has been utilized to correct optical aberrations, in particular for retinal OCT, but in addition to requiring elaborate and expensive setups, the real-time optimization requirements at the time of imaging, and the correction of spatially varying effects of aberrations throughout an imaged volume, remain as significant challenges. This chapter presents computed imaging solutions for the reconstruction of sample structure when imaging with ideal and aberrated Gaussian beams.

  17. Low-Cost Large Aperture Telescopes for Optical Communications

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid

    2006-01-01

    Low-cost, 0.5-1 meter ground apertures are required for near-Earth laser communications. Low-cost ground apertures with equivalent diameters greater than 10 meters are desired for deep-space communications. This presentation focuses on identifying schemes to lower the cost of constructing networks of large apertures while continuing to meet the requirements for laser communications. The primary emphasis here is on the primary mirror. A slumped glass spherical mirror, along with passive secondary mirror corrector and active adaptive optic corrector show promise as a low-cost alternative to large diameter monolithic apertures. To verify the technical performance and cost estimate, development of a 1.5-meter telescope equipped with gimbal and dome is underway.

  18. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  19. Compounding in synthetic aperture imaging.

    PubMed

    Hansen, Jens Munk; Jensen, Jørgen Arendt

    2012-09-01

    A method for obtaining compound images using synthetic aperture data is investigated using a convex array transducer. The new approach allows spatial compounding to be performed for any number of angles without reducing the frame rate or temporal resolution. This important feature is an intrinsic property of how the compound images are constructed using synthetic aperture data and an improvement compared with how spatial compounding is obtained using conventional methods. The synthetic aperture compound images are created by exploiting the linearity of delay-and-sum beamformation for data collected from multiple spherical emissions to synthesize multiple transmit and receive apertures, corresponding to imaging the tissue from multiple directions. The many images are added incoherently, to produce a single compound image. Using a 192-element, 3.5-MHz, λ-pitch transducer, it is demonstrated from tissue-phantom measurements that the speckle is reduced and the contrast resolution improved when applying synthetic aperture compound imaging. At a depth of 4 cm, the size of the synthesized apertures is optimized for lesion detection based on the speckle information density. This is a performance measure for tissue contrast resolution which quantifies the tradeoff between resolution loss and speckle reduction. The speckle information density is improved by 25% when comparing synthetic aperture compounding to a similar setup for compounding using dynamic receive focusing. The cystic resolution and clutter levels are measured using a wire phantom setup and compared with conventional application of the array, as well as to synthetic aperture imaging without compounding. If the full aperture is used for synthetic aperture compounding, the cystic resolution is improved by 41% compared with conventional imaging, and is at least as good as what can be obtained using synthetic aperture imaging without compounding. PMID:23007781
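
    The core compounding step, incoherent summation of envelope images from independent looks, can be sketched in a few lines; this is a hedged illustration using synthetic complex speckle in place of beamformed synthetic aperture data (the function and variable names are our own, not from the paper):

```python
import numpy as np

def compound_incoherent(complex_images):
    """Incoherently compound beamformed images of the same region.

    Each input is a complex (I/Q) image synthesized from a different
    look direction; averaging envelope magnitudes suppresses speckle
    while preserving the mean backscatter level.
    """
    return np.mean([np.abs(img) for img in complex_images], axis=0)

# Toy demonstration with fully developed speckle: the envelope SNR
# (mean/std) of a single look is ~1.91; N independent looks improve
# it by roughly sqrt(N).
rng = np.random.default_rng(0)
n_looks, shape = 9, (256, 256)
looks = [rng.normal(size=shape) + 1j * rng.normal(size=shape)
         for _ in range(n_looks)]
single = np.abs(looks[0])
compounded = compound_incoherent(looks)
snr_single = single.mean() / single.std()
snr_comp = compounded.mean() / compounded.std()
```

    With nine statistically independent looks the SNR ratio comes out close to sqrt(9) = 3, mirroring the trade-off between speckle reduction and resolution that the abstract quantifies via the speckle information density.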

  20. Diagnostic for dynamic aperture

    SciTech Connect

    Morton, P.L.; Pellegrin, J.L.; Raubenheimer, T.; Rivkin, L.; Ross, M.; Ruth, R.D.; Spence, W.L.

    1985-04-01

    In large accelerators and low-beta colliding beam storage rings, the strong sextupoles, which are required to correct the chromatic effects, produce strong nonlinear forces which act on particles in the beam. In addition, in large hadron storage rings, the superconducting magnets have significant nonlinear fields. To understand the effects of these nonlinearities on the particle motion there is currently a large theoretical effort using both analytic techniques and computer tracking. This effort is focused on the determination of the 'dynamic aperture' (the stable acceptance) of both present and future accelerators and storage rings. A great deal of progress has been made in understanding nonlinear particle motion, but very little experimental verification of the theoretical results is available. In this paper we describe 'dynamic tracking', a method being studied at the SPEAR storage ring, which can be used to obtain experimental results in a form convenient for comparison with the theoretical predictions.
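
    The kind of computer tracking described here can be sketched with the classic one-turn model of a linear lattice plus a thin sextupole kick (a Henon-like map); the tune, kick strength, and loss threshold below are illustrative assumptions, not SPEAR parameters:

```python
import numpy as np

def track(x0, turns=1000, mu=0.2521, k=1.0):
    """Henon-like one-turn map: linear phase-space rotation by tune mu
    followed by a thin sextupole kick proportional to x**2."""
    c, s = np.cos(2 * np.pi * mu), np.sin(2 * np.pi * mu)
    x, p = x0, 0.0
    for _ in range(turns):
        p += k * x * x                     # thin sextupole kick
        x, p = c * x + s * p, -s * x + c * p
        if abs(x) > 10.0:
            return False                   # particle lost
    return True

# Scan launch amplitudes to bracket the dynamic aperture
# (largest amplitude that survives the tracking run).
amps = np.linspace(0.01, 4.0, 40)
stable = np.array([track(a) for a in amps])
dyn_aperture = amps[stable][-1] if stable.any() else 0.0
```

    Small-amplitude particles see an almost linear rotation and survive, while large-amplitude particles are driven out by the quadratic kick; the boundary between the two is the tracked estimate of the dynamic aperture.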

  1. Multiple instrument distributed aperture sensor (MIDAS) testbed

    NASA Astrophysics Data System (ADS)

    Smith, Eric H.; de Leon, Erich; Dean, Peter; Deloumi, Jake; Duncan, Alan; Hoskins, Warren; Kendrick, Richard; Mason, James; Page, Jeff; Phenis, Adam; Pitman, Joe; Pope, Christine; Privari, Bela; Ratto, Doug; Romero, Enrique; Shu, Ker-Li; Sigler, Robert; Stubbs, David; Tapos, Francisc; Yee, Albert

    2005-08-01

    Lockheed Martin is developing an innovative and adaptable optical telescope composed of an array of nine identical afocal sub-telescopes. Inherent in the array design is the ability to perform high-resolution broadband imaging, Fizeau Fourier transform spectroscopy (FTS) imaging, and single-exposure multi-spectral and polarimetric imaging. Additionally, the sensor suite's modular design integrates multiple science packages for active and passive sensing from 0.4 to 14 microns. We describe the opto-mechanical design of our concept, the Multiple Instrument Distributed Aperture Sensor (MIDAS), and a selection of passive and active remote sensing missions it fulfills.

  2. Optical aperture synthesis

    NASA Astrophysics Data System (ADS)

    van der Avoort, Casper

    2006-05-01

    Optical long baseline stellar interferometry is an observational technique in astronomy that has existed for over a century but has truly bloomed during the last decades. The undoubted value of stellar interferometry as a technique to measure stellar parameters beyond the classical resolution limit is spreading more and more to the regime of synthesis imaging. With optical aperture synthesis imaging, the measurement of parameters is extended to the reconstruction of high-resolution stellar images. A number of optical telescope arrays for synthesis imaging are operational on Earth, while space-based telescope arrays are being designed. For all imaging arrays, the combination of the light collected by the telescopes in the array can be performed in a number of ways. In this thesis, methods are introduced to model these schemes of beam combination and compare their effectiveness in the generation of data to be used to reconstruct the image of a stellar object. One of these methods of beam combination is to be applied in a future space telescope. The European Space Agency is developing a mission that can valuably be extended with an imaging beam combiner. This mission is labeled Darwin, as its main goal is to provide information on the origin of life. The primary objective is the detection of planets around nearby stars, called exoplanets, and more precisely, Earth-like exoplanets. This detection is based on a signal, rather than an image. With an imaging mode, designed as described in this thesis, Darwin can make images of, for example, the planetary system to which a detected exoplanet belongs or, as another example, of the dust disk around a star out of which planets form. Such images will greatly contribute to the understanding of the formation of our own planetary system and of how and when life became possible on Earth. The comparison of beam combination methods for interferometric imaging occupies most of the pages of this thesis.

  3. Material Measurements Using Groundplane Apertures

    NASA Technical Reports Server (NTRS)

    Komisarek, K.; Dominek, A.; Wang, N.

    1995-01-01

    A technique for material parameter determination using an aperture in a groundplane is studied. The material parameters are found by relating the measured reflected field in the aperture to a numerical model. Two apertures are studied, which can have a variety of different material configurations covering the aperture. The aperture cross-sections studied are rectangular and coaxial. The material configurations involved combinations of single and dual layers, with or without an exterior resistive sheet. The resistivity of the resistive sheet can be specified to simulate anything from a perfect electric conductor (PEC) backing (0 Ohms/square) to a free-space backing (infinite Ohms/square). Numerical parameter studies and measurements were performed to assess the feasibility of the technique.

  4. Performance of a Block Structured, Hierarchical Adaptive MeshRefinement Code on the 64k Node IBM BlueGene/L Computer

    SciTech Connect

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step toward petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  5. Do you really represent my task? Sequential adaptation effects to unexpected events support referential coding for the joint Simon effect.

    PubMed

    Klempova, Bibiana; Liepelt, Roman

    2016-07-01

    Recent findings suggest that a Simon effect (SE) can be induced in Individual go/nogo tasks when responding next to an event-producing object salient enough to provide a reference for the spatial coding of one's own action. However, there is skepticism against referential coding for the joint Simon effect (JSE) by proponents of task co-representation. In the present study, we tested assumptions of task co-representation and referential coding by introducing unexpected double response events in a joint go/nogo and a joint independent go/nogo task. In Experiment 1b, we tested if task representations are functionally similar in joint and standard Simon tasks. In Experiment 2, we tested sequential updating of task co-representation after unexpected single response events in the joint independent go/nogo task. Results showed increased JSEs following unexpected events in the joint go/nogo and joint independent go/nogo task (Experiment 1a). While the former finding is in line with the assumptions made by both accounts (task co-representation and referential coding), the latter finding supports referential coding. In contrast to Experiment 1a, we found a decreased SE after unexpected events in the standard Simon task (Experiment 1b), providing evidence against the functional equivalence assumption between joint and two-choice Simon tasks of the task co-representation account. Finally, we found an increased JSE also following unexpected single response events (Experiment 2), ruling out that the findings of the joint independent go/nogo task in Experiment 1a were due to a re-conceptualization of the task situation. In conclusion, our findings support referential coding also for the joint Simon effect. PMID:25833374

  6. Interferometric synthetic aperture microscopy

    NASA Astrophysics Data System (ADS)

    Ralston, Tyler S.; Marks, Daniel L.; Scott Carney, P.; Boppart, Stephen A.

    2007-02-01

    State-of-the-art methods in high-resolution three-dimensional optical microscopy require that the focus be scanned through the entire region of interest. However, an analysis of the physics of the light-sample interaction reveals that the Fourier-space coverage is independent of depth. Here we show that, by solving the inverse scattering problem for interference microscopy, computed reconstruction yields volumes with a resolution in all planes that is equivalent to the resolution achieved only at the focal plane for conventional high-resolution microscopy. In short, the entire illuminated volume has spatially invariant resolution, thus eliminating the compromise between resolution and depth of field. We describe and demonstrate a novel computational image-formation technique called interferometric synthetic aperture microscopy (ISAM). ISAM has the potential to broadly impact real-time three-dimensional microscopy and analysis in the fields of cell and tumour biology, as well as in clinical diagnosis where in vivo imaging is preferable to biopsy.

  7. Interferometric synthetic aperture microscopy

    NASA Astrophysics Data System (ADS)

    Ralston, Tyler S.

    State-of-the-art interferometric microscopies have problems representing objects that lie outside of the focus because defocus and diffraction effects are not accounted for in the processing. These problems occur because of the lack of comprehensive models to include the scattering effects in the processing. In this dissertation, a new modality in three-dimensional (3D) optical microscopy, Interferometric Synthetic Aperture Microscopy (ISAM), is introduced to account for the scattering effects. Comprehensive models for interferometric microscopy, such as optical coherence tomography (OCT), are developed, for which forward, adjoint, normal, and inverse operators are formulated. Using an accurate model for the probe beam, the resulting algorithms demonstrate accurate linear estimation of the susceptibility of an object from the interferometric data. Using the regularized least squares solution, an ISAM reconstruction of underlying object structure having spatially invariant resolution is obtained from simulated and experimental interferometric data, even in regions outside of the focal plane of the lens. Two-dimensional (2D) and 3D interferometric data are used to resolve objects outside of the confocal region with minimal loss of resolution, unlike in OCT. Therefore, high-resolution details are recovered from outside of the confocal region. Models and solutions are presented for the planar-scanned, the rotationally scanned, and the full-field illuminated geometries. The models and algorithms presented account for the effects of a finite beam width, the source spectrum, the illumination and collection fields, as well as defocus, diffraction, and dispersion effects.

  8. Sparse aperture endoscope

    DOEpatents

    Fitch, Joseph P.

    1999-07-06

    An endoscope which reduces the volume needed by the imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools, which permits smaller incisions in surgical and diagnostic medical applications, thus producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing.

  9. Sparse aperture endoscope

    DOEpatents

    Fitch, J.P.

    1999-07-06

    An endoscope is disclosed which reduces the volume needed by the imaging part, maintains the resolution of a wide-diameter optical system while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics, it allows a larger fraction of the volume to be used for non-imaging tools, which permits smaller incisions in surgical and diagnostic medical applications, thus producing less trauma to the patient, or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, which increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing. 7 figs.

  10. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Technical Reports Server (NTRS)

    Chen, Y. S.; Farmer, R. C.

    1992-01-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  11. Adaptation of multidimensional group particle tracking and particle wall-boundary condition model to the FDNS code

    NASA Astrophysics Data System (ADS)

    Chen, Y. S.; Farmer, R. C.

    1992-04-01

    A particulate two-phase flow CFD model was developed based on the FDNS code, which is a pressure-based predictor plus multi-corrector Navier-Stokes flow solver. Turbulence models with compressibility correction and the wall function models were employed as submodels. A finite-rate chemistry model was used for reacting flow simulation. For particulate two-phase flow simulations, a Eulerian-Lagrangian solution method using an efficient implicit particle trajectory integration scheme was developed in this study. Effects of particle-gas reaction and of particle size change due to agglomeration or fragmentation were not considered in this investigation. At the onset of the present study, a two-dimensional version of FDNS which had been modified to treat Lagrangian tracking of particles (FDNS-2DEL) had already been written and was operational. The FDNS-2DEL code was too slow for practical use, mainly because it had not been written in a form amenable to vectorization on the Cray, nor was the full three-dimensional form of FDNS utilized. The specific objective of this study was to reorder the calculations into long single arrays for automatic vectorization on the Cray and to implement the full three-dimensional version of FDNS to produce the FDNS-3DEL code. Since the FDNS-2DEL code was slow, a very limited number of test cases had been run with it. This study was also intended to increase the number of cases simulated to verify and improve, as necessary, the particle tracking methodology coded in FDNS.

  12. Lucky imaging and aperture synthesis with low-redundancy apertures.

    PubMed

    Ward, Jennifer E; Rhodes, William T; Sheridan, John T

    2009-01-01

    Lucky imaging, used with some success in astronomical and even horizontal-path imaging, relies on fleeting conditions of the atmosphere that allow momentary improvements in image quality, at least in portions of an image. Aperture synthesis allows a larger aperture and, thus, a higher-resolution imaging system to be synthesized through the superposition of image spatial-frequency components gathered by cooperative combinations of smaller subapertures. A combination of lucky imaging and aperture synthesis strengthens both methods for obtaining improved images through the turbulent atmosphere. We realize the lucky imaging condition appropriate for aperture synthesis imaging for a pair of rectangular subapertures and demonstrate that this condition occurs when the signal energy associated with bandpass spatial-frequency components achieves its maximum value. PMID:19107157
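
    The selection criterion the authors identify, maximum signal energy in the bandpass spatial-frequency components, can be sketched as a frame-ranking procedure; the annulus radii, frame sizes, and Gaussian blur model below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def bandpass_energy(frame, r_lo, r_hi):
    """Signal energy in an annulus of spatial frequencies."""
    F = np.fft.fftshift(np.fft.fft2(frame))
    ny, nx = frame.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    r = np.hypot(x, y)
    return float(np.sum(np.abs(F[(r >= r_lo) & (r < r_hi)]) ** 2))

def select_lucky(frames, r_lo, r_hi, keep=0.1):
    """Keep the fraction of frames with the highest bandpass energy."""
    scores = np.array([bandpass_energy(f, r_lo, r_hi) for f in frames])
    n_keep = max(1, int(len(frames) * keep))
    best = np.argsort(scores)[::-1][:n_keep]
    return [frames[i] for i in best], scores

def gaussian_blur(img, sigma):
    """Frequency-domain Gaussian blur standing in for bad seeing."""
    ny, nx = img.shape
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    H = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

# One sharp frame among nine blurred ones: the sharp frame retains
# the most energy in the bandpass annulus and is the one selected.
rng = np.random.default_rng(1)
frames = [rng.normal(size=(64, 64))]
frames += [gaussian_blur(rng.normal(size=(64, 64)), 2.0) for _ in range(9)]
lucky, scores = select_lucky(frames, r_lo=10, r_hi=30, keep=0.1)
```

    The annulus is the sketch's stand-in for the bandpass spatial frequencies gathered by a pair of separated subapertures.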

  13. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient object oriented C++ language implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded-up in an interpreted array processing computer language.
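
    The slope-to-actuator step underlying this kind of adaptive optics optimization can be sketched as a linear least-squares problem; the influence matrix below is random stand-in data, not the Beamlet system's calibration (a hypothetical illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical influence matrix: the slope response of 40 Hartmann
# sensor measurements to unit pokes of 12 deformable mirror actuators.
n_slopes, n_act = 40, 12
A = rng.normal(size=(n_slopes, n_act))

# Unknown wavefront distortion expressed in actuator space, and the
# (slightly noisy) Hartmann slopes it produces.
a_true = rng.normal(size=n_act)
s = A @ a_true + 0.01 * rng.normal(size=n_slopes)

# Least-squares reconstruction; the correction command is its negative.
a_hat, *_ = np.linalg.lstsq(A, s, rcond=None)
command = -a_hat
```

    In practice the influence matrix is measured by poking each actuator and recording the sensor response, and the pseudo-inverse is regularized against poorly sensed mirror modes.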

  14. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    SciTech Connect

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include a detailed outline of the entire processing procedure for application of CENTRM in a final report complete with demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  15. Growth and evolution of small porous icy bodies with an adaptive-grid thermal evolution code. I. Application to Kuiper belt objects and Enceladus

    NASA Astrophysics Data System (ADS)

    Prialnik, Dina; Merk, Rainer

    2008-09-01

    We present a new 1-dimensional thermal evolution code suited for small icy bodies of the Solar System, based on modern adaptive-grid numerical techniques and accommodating multiphase flow through a porous medium. The code is used for evolutionary calculations spanning 4.6×10^9 yr of a growing body made of ice and rock, starting with a 10 km radius seed and ending with an object 250 km in radius. Initial conditions are chosen to match two different classes of objects: a Kuiper belt object, and Saturn's moon Enceladus. Heating by the decay of 26Al, as well as long-lived radionuclides, is taken into account. Several values of the thermal conductivity and accretion laws are tested. We find that in all cases the melting point of ice is reached in a central core. Evaporation and flow of water and vapor gradually remove the water from the core and the final (present) structure is differentiated, with a rocky, highly porous core of 80 km radius (and up to 160 km for very low conductivities). Outside the core, due to refreezing of water and vapor, a compact, ice-rich layer forms, a few tens of km thick (except in the case of very high conductivity). If the ice is initially amorphous, as expected in the Kuiper belt, the amorphous ice is preserved in an outer layer about 20 km thick. We conclude by suggesting various ways in which the code may be extended.
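
    The driving mechanism, radiogenic heating by short-lived 26Al competing with conduction, can be illustrated with a deliberately stripped-down sketch: a uniform slab with explicit differencing and illustrative constants. The actual code's adaptive grid, spherical geometry, porosity, and multiphase flow are all omitted here:

```python
import numpy as np

# Illustrative constants (assumptions, not values from the paper)
kappa = 1e-6                 # thermal diffusivity [m^2/s]
cp = 1000.0                  # specific heat [J/(kg K)]
H0 = 1e-7                    # initial 26Al heating rate [W/kg]
t_half = 0.72e6 * 3.156e7    # 26Al half-life [s]

nx, dx = 51, 2000.0          # 100 km slab
T = np.full(nx, 100.0)       # initial temperature [K]
dt = 1.0e10                  # time step [s]; satisfies dt < dx**2 / (2 * kappa)
t = 0.0
for _ in range(2000):        # roughly 0.6 Myr of evolution
    H = H0 * 0.5 ** (t / t_half)                       # decaying heat source
    lap = (T[:-2] - 2.0 * T[1:-1] + T[2:]) / dx ** 2   # discrete Laplacian
    T[1:-1] += dt * (kappa * lap + H / cp)
    T[0] = T[-1] = 100.0                               # cold surfaces
    t += dt
```

    Even this crude model shows the interior passing the melting point of ice while a thin conductive skin stays cold, the qualitative behavior behind the differentiated core and refrozen ice-rich shell the authors find.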

  16. Superresolution and Synthetic Aperture Radar

    SciTech Connect

    DICKEY,FRED M.; ROMERO,LOUIS; DOERRY,ARMIN W.

    2001-05-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. The application of the concept to synthetic aperture radar is investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. A criterion for judging superresolution processing of an image is presented.
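
    The ill-posedness of the operator-inversion view can be made concrete with a one-dimensional sketch: a band-limiting operator inverted naively blows up measurement noise, while a Tikhonov-regularized inverse stays stable. The transfer function, noise level, and regularization parameter below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# 1D sketch: imaging as multiplication by a low-pass transfer function
# K in the frequency domain, y = K x + noise.
n = 128
freqs = np.fft.fftfreq(n)
K = np.exp(-(freqs / 0.05) ** 2)          # band-limiting operator

x = np.zeros(n)
x[[40, 44, 90]] = 1.0                     # scene with fine detail
y = np.real(np.fft.ifft(K * np.fft.fft(x)))
y += 0.001 * rng.normal(size=n)           # measurement noise

Y = np.fft.fft(y)
naive = np.real(np.fft.ifft(Y / K))       # unregularized inversion
alpha = 1e-4                              # Tikhonov parameter
tikh = np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + alpha)))

err_naive = np.linalg.norm(naive - x)
err_tikh = np.linalg.norm(tikh - x)
```

    The naive inverse divides near-zero transfer-function values into the noise floor and diverges; the regularized inverse trades a small bias for bounded noise amplification, which is the essence of judging how much resolution beyond the classical limit can honestly be recovered.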

  17. Fabrication of the pinhole aperture for AdaptiSPECT

    PubMed Central

    Kovalsky, Stephen; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical pinhole SPECT imaging system under final construction at the Center for Gamma-Ray Imaging. The system is designed to autonomously change its imaging configuration. It comprises 16 detectors mounted on translational stages that move them radially toward and away from the center of the field-of-view. The system also possesses an adaptive pinhole aperture with multiple collimator diameters and pinhole sizes, as well as the ability to switch between multiplexed and non-multiplexed imaging configurations. In this paper, we describe the fabrication of the AdaptiSPECT pinhole aperture and its controllers. PMID:26146443

  18. Computational study of ion beam extraction phenomena through multiple apertures

    SciTech Connect

    Hu, Wanpeng; Sang, Chaofeng; Tang, Tengfei; Wang, Dezhen; Li, Ming; Jin, Dazhi; Tan, Xiaohua

    2014-03-15

    The process of ion extraction through multiple apertures is investigated using a two-dimensional particle-in-cell code. We consider apertures of fixed diameter with a hydrogen plasma background, and trace the trajectories of electrons, H+ and H2+ ions in the self-consistently calculated electric field. The focus of this work is the fundamental physics of ion extraction, not a particular device. The computed convergence and divergence of the extracted ion beam are analyzed. We find that the extracted ion flux reaching the extraction electrode is non-uniform, that the peak flux positions change according to operational parameters, and that they do not necessarily match the positions of the apertures in the y-direction. The profile of the ion flux reaching the electrode is mainly affected by the bias voltage and the distance between the grid wall and the extraction electrode.
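
    The particle-tracing kernel at the heart of any particle-in-cell code can be sketched with a single ion advanced through a static, uniform extraction field; a real PIC code recomputes the field self-consistently from the particle charge density every step, which is omitted here, and all numbers below are illustrative assumptions:

```python
import numpy as np

q = 1.602e-19      # ion charge [C]
m = 1.673e-27      # proton (H+) mass [kg]
E = 1.0e4          # uniform extraction field [V/m]
gap = 0.01         # plasma grid to extraction electrode [m]

# Leapfrog (kick-drift) integration of one H+ ion across the gap.
dt = 1e-10
x, v = 0.0, 0.0
steps = 0
while x < gap:
    v += q * E / m * dt      # kick: accelerate in the field
    x += v * dt              # drift: advance the position
    steps += 1

# Energy gained should match the potential drop q*E*gap (~100 eV here).
kinetic = 0.5 * m * v ** 2
```

    Replacing the constant E with a field solved from the particle charge density on a grid each step turns this loop into the self-consistent scheme the paper uses.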

  19. MARE2DEM: an open-source code for anisotropic inversion of controlled-source electromagnetic and magnetotelluric data using parallel adaptive 2D finite elements (Invited)

    NASA Astrophysics Data System (ADS)

    Key, K.

    2013-12-01

    This work announces the public release of an open-source inversion code named MARE2DEM (Modeling with Adaptively Refined Elements for 2D Electromagnetics). Although initially designed for the rapid inversion of marine electromagnetic data, MARE2DEM now supports a wide variety of acquisition configurations for both offshore and onshore surveys that utilize electric and magnetic dipole transmitters or magnetotelluric plane waves. The model domain is flexibly parameterized using a grid of arbitrarily shaped polygonal regions, allowing complicated structures such as topography or seismically imaged horizons to be easily assimilated. MARE2DEM efficiently solves the forward problem in parallel by dividing the input data parameters into smaller subsets using a parallel data decomposition algorithm. The data subsets are then solved in parallel using an automatic adaptive finite element method that iteratively solves the forward problem on successively refined finite element meshes until a specified accuracy tolerance is met, thus freeing the end user from the burden of designing an accurate numerical modeling grid. Regularized non-linear inversion for isotropic or anisotropic conductivity is accomplished with a new implementation of Occam's method referred to as fast-Occam, which is able to minimize the objective function in far fewer forward evaluations than required by the original method. This presentation will review the theoretical considerations behind MARE2DEM and use a few recent offshore EM data sets to demonstrate its capabilities and to showcase the software interface tools that streamline model building and data inversion.
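
    The regularized inversion at the core of Occam's method can be sketched on a toy linear problem: minimize data misfit plus a roughness penalty, then search over the trade-off parameter. The operator and model below are synthetic stand-ins; the real CSEM/MT forward problem is nonlinear and handled by iterated linearization:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical linear forward operator G mapping 30 model cells
# to 20 data values, with a smooth true model.
n_d, n_m = 20, 30
G = rng.normal(size=(n_d, n_m))
m_true = np.sin(np.linspace(0, np.pi, n_m))
d = G @ m_true + 0.01 * rng.normal(size=n_d)

# First-difference roughness operator used as the regularizer.
R = np.diff(np.eye(n_m), axis=0)

def occam_step(mu):
    """Exact minimizer of ||G m - d||^2 + mu ||R m||^2."""
    lhs = G.T @ G + mu * R.T @ R
    return np.linalg.solve(lhs, G.T @ d)

# Occam searches mu for the smoothest model that fits the data to a
# target misfit; here we simply evaluate a few trade-off values.
models = {mu: occam_step(mu) for mu in (1e-2, 1e-1, 1.0)}
```

    Increasing mu yields smoother models at the cost of a larger data misfit; Occam's contribution is the automated search along this trade-off curve, and fast-Occam reduces how many forward solves that search costs.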

  20. FESDIF -- Finite Element Scalar Diffraction theory code

    SciTech Connect

    Kraus, H.G.

    1992-09-01

    This document describes the theory and use of a powerful scalar diffraction theory based computer code for calculation of intensity fields due to diffraction of optical waves by two-dimensional planar apertures and lenses. This code is called FESDIF (Finite Element Scalar Diffraction). It is based upon both Fraunhofer and Kirchhoff scalar diffraction theories. Simplified routines for circular apertures are included. However, the real power of the code comes from its basis in finite element methods. These methods allow the diffracting aperture to be virtually any geometric shape, including the various secondary aperture obstructions present in telescope systems. Aperture functions, with virtually any phase and amplitude variations, are allowed in the aperture openings. Step change aperture functions are accommodated. The incident waves are considered to be monochromatic. Plane waves, spherical waves, or Gaussian laser beams may be incident upon the apertures. Both area and line integral transformations were developed for the finite element based diffraction transformations. There is some loss of aperture function generality in the line integral transformations which are typically many times more computationally efficient than the area integral transformations when applicable to a particular problem.
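
    In the Fraunhofer limit the field for a uniformly illuminated planar aperture reduces to a Fourier transform of the aperture function; the grid-based sketch below (an FFT illustration, not FESDIF's finite element method) produces the familiar Airy pattern for a circular opening:

```python
import numpy as np

# Sample a circular aperture on a square grid.
n = 512
coords = np.arange(n) - n // 2
X, Y = np.meshgrid(coords, coords)
aperture = (np.hypot(X, Y) <= 20).astype(float)   # radius-20 opening

# Fraunhofer diffraction: far-field intensity is |FT(aperture)|^2.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
intensity = np.abs(field) ** 2
intensity /= intensity.max()                      # normalize the peak
```

    FESDIF's finite element formulation generalizes this to arbitrary aperture shapes, secondary obstructions, and non-uniform amplitude and phase across the opening, which a plain FFT on a regular grid handles poorly.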

  1. Multiple-Aperture-Based Solar Seeing Profiler

    NASA Astrophysics Data System (ADS)

    Ren, Deqing; Zhao, Gang; Zhang, Xi; Dou, Jiangpei; Chen, Rui; Zhu, Yongtian; Yang, Feng

    2015-09-01

    Characterization of day-time atmospheric turbulence profiles up to 30 km above the telescope is crucial for the design and performance estimation of future solar multiconjugate adaptive optics (MCAO) systems. Recently, the S-DIMM+ method has been successfully used to measure the vertical profile of turbulence. However, measuring the profile up to 30 km with the S-DIMM+ method requires a telescope with a diameter of at least 1.0 m, which restricts its usage, since large telescopes are scarce and their time is limited. To solve this problem, we introduce the multiple-aperture seeing profiler (MASP), which consists of two portable small telescopes instead of a single large aperture. Numerical simulations are carried out to evaluate the performance of the MASP. We find that for the single-layer case, the MASP can retrieve the seeing with an error of ~5% using 800 frames of wavefront sensor (WFS) data, quite similar to the results of a telescope with a diameter of 1120 mm. We also simulate profiles with four turbulence layers and find that the MASP can effectively retrieve the strengths and heights of the four layers. Since previous measurements at Big Bear Solar Observatory showed that the day-time turbulence profile typically consists of four layers, the MASP we introduce is sufficient for actual seeing measurement.

  2. Metamaterial Apertures for Computational Imaging

    NASA Astrophysics Data System (ADS)

    Hunt, John; Driscoll, Tom; Mrozack, Alex; Lipworth, Guy; Reynolds, Matthew; Brady, David; Smith, David R.

    2013-01-01

    By leveraging metamaterials and compressive imaging, a low-profile aperture capable of microwave imaging without lenses, moving parts, or phase shifters is demonstrated. This designer aperture allows image compression to be performed on the physical hardware layer rather than in the postprocessing stage, thus averting the detector, storage, and transmission costs associated with full diffraction-limited sampling of a scene. A guided-wave metamaterial aperture is used to perform compressive image reconstruction at 10 frames per second of two-dimensional (range and angle) sparse still and video scenes at K-band (18 to 26 gigahertz) frequencies, using frequency diversity to avoid mechanical scanning. Image acquisition is accomplished with a 40:1 compression ratio.
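    The reconstruction step in such a compressive imager amounts to solving an underdetermined linear system with a sparsity prior. A hedged sketch using iterative soft thresholding (ISTA), with a random Gaussian matrix standing in for the metamaterial aperture's actual transfer matrix:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 100, 40                                   # scene pixels, measurements (assumed sizes)
    H = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in for the aperture transfer matrix

    f = np.zeros(n)
    f[[7, 42, 81]] = [1.0, -0.8, 0.6]                # sparse toy scene
    g = H @ f                                        # compressive measurements

    # ISTA: gradient step on ||g - Hx||^2 followed by soft thresholding
    lam = 0.01
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    x = np.zeros(n)
    for _ in range(2000):
        r = x + step * H.T @ (g - H @ x)
        x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)
    ```

    The recovered `x` matches the sparse scene despite there being fewer measurements than pixels, which is the compression the physical aperture performs.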

  3. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign. PMID:24443642

  4. Adaptive coding of the value of social cues with oxytocin, an fMRI study in autism spectrum disorder.

    PubMed

    Andari, Elissar; Richard, Nathalie; Leboyer, Marion; Sirigu, Angela

    2016-03-01

    The neuropeptide oxytocin (OT) is one of the major targets of research in neuroscience, with respect to social functioning. Oxytocin promotes social skills and improves the quality of face processing in individuals with social dysfunctions such as autism spectrum disorder (ASD). Although one of OT's key functions is to promote social behavior during dynamic social interactions, the neural correlates of this function remain unknown. Here, we combined acute intranasal OT (IN-OT) administration (24 IU) and fMRI with an interactive ball game and a face-matching task in individuals with ASD (N = 20). We found that IN-OT selectively enhanced the brain activity of early visual areas in response to faces as compared to non-social stimuli. OT inhalation modulated the BOLD activity of amygdala and hippocampus in a context-dependent manner. Interestingly, IN-OT intake enhanced the activity of mid-orbitofrontal cortex in response to a fair partner, and insula region in response to an unfair partner. These OT-induced neural responses were accompanied by behavioral improvements in terms of allocating appropriate feelings of trust toward different partners' profiles. Our findings suggest that OT impacts the brain activity of key areas implicated in attention and emotion regulation in an adaptive manner, based on the value of social cues. PMID:26872344

  5. A flat laser array aperture

    NASA Astrophysics Data System (ADS)

    Papadakis, Stergios J.; Ricciardi, Gerald F.; Gross, Michael C.; Krill, Jerry A.

    2010-04-01

    We describe a design concept for a flat (or conformal) thin-plate laser phased-array aperture. The aperture consists of a substrate supporting a grid of single-mode optical waveguides fabricated from a linear electro-optic material. The waveguides are coupled to a single laser source or detector. An arrangement of electrodes provides for two-dimensional beam steering by controlling the phase of the light entering the grid. The electrodes can also be modulated to simultaneously provide atmospheric turbulence modulation for long-range free-space optical communication. An approach for fabrication is also outlined.

  6. High resolution non-iterative aperture synthesis.

    PubMed

    Kraczek, Jeffrey R; McManamon, Paul F; Watson, Edward A

    2016-03-21

    The maximum resolution of a multiple-input multiple-output (MIMO) imaging system is determined by the size of the synthetic aperture. The synthetic aperture is determined by a coordinate shift using the relative positions of the illuminators and receive apertures. Previous methods have shown non-iterative phasing for multiple illuminators with a single receive aperture for intra-aperture synthesis. This work shows non-iterative phasing with both multiple illuminators and multiple receive apertures for inter-aperture synthesis. Simulated results show that piston, tip, and tilt can be calculated using inter-aperture phasing after intra-aperture phasing has been performed. Use of a fourth illuminator for increased resolution is shown. The modulation transfer function (MTF) is used to quantitatively judge increased resolution. PMID:27136816
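    The MTF used here as a resolution metric is, for an incoherent imager, the normalized autocorrelation of the pupil, so synthesizing a wider aperture directly widens the MTF support. A 1-D illustration with arbitrary sizes (not taken from the paper):

    ```python
    import numpy as np

    def mtf_1d(pupil):
        # Normalized autocorrelation of a 1-D pupil function
        ac = np.correlate(pupil, pupil, mode="full")
        return ac / ac.max()

    n = 256
    x = np.arange(n)
    single_rx = (np.abs(x - 128) < 16).astype(float)   # one receive aperture
    synthetic = (np.abs(x - 128) < 32).astype(float)   # synthesized aperture, twice as wide

    support_single = np.count_nonzero(mtf_1d(single_rx) > 1e-12)
    support_synth = np.count_nonzero(mtf_1d(synthetic) > 1e-12)
    ```

    Doubling the aperture roughly doubles the band of nonzero MTF, i.e. the spatial-frequency cutoff, which is the quantitative gain the paper measures.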

  7. Military adaptation of commercial items: laboratory evaluation of the Code E-436 engine. Technical report 17 January-28 July 1983

    SciTech Connect

    Rimpela, R.J.G.

    1984-02-01

    The engine was installed in a dynamometer test cell at US Army Tank-Automotive Command (TACOM) and conventional dynamometer testing procedures were used to determine basic engine characteristics. The characteristics determined were full load performance, fuel economy at full load and part load, engine oil consumption, and engine heat rejection. During pre-endurance testing, the Code E-436 engine produced 378 observed kW (506.4 BHP) at full load, at the rated speed of 2,600 RPM. The maximum torque during full load operation was 1,439 Nm (1,061 lb-ft) at 2,400 RPM. Minimum brake specific fuel consumption at full load occurred at 2,200 RPM and was 217 g/kWh (0.356 lb/BHP-HR). After the NATO Endurance Test the engine produced 375.1 observed kW (503.0 BHP) at full load and rated speed. The maximum torque was 1,423.8 Nm (1,050 lb-ft) at 2,400 RPM. The total lube oil consumption during the 400-hour NATO endurance test was 19.7 kg (43.4 lb). Following the endurance test, visual and dimensional inspection of the engine revealed all major engine parts to be in excellent condition except for the pistons: five of the eight pistons developed cracks in the pin bores. Though the engine completed the endurance test (400 hours) and was operated for a total of 582 hours, it is considered to have failed the 400-hour NATO test due to piston failure.

  8. Dynamic aperture measurement on Aladdin

    SciTech Connect

    Bridges, J.; Cho, Y.; Chou, W.; Crosbie, E.; Kramer, S.; Kustom, R.; Voss, D.; Teng, L.; Kleman, K.; Otte, R.; Trzeciak, W.; Symon, K.; Wisconsin Univ., Stoughton, WI . Synchrotron Radiation Center; Wisconsin Univ., Madison, WI . Dept. of Physics)

    1989-01-01

    The sextupole-induced non-linear transverse beam dynamics in the synchrotron radiation storage ring Aladdin is studied. Specifically, the dynamic aperture is measured as a function of the sextupole strength. The results agree reasonably well with computer simulations. 1 ref., 8 figs., 1 tab.

  9. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

    A large (tens of meters) aperture space telescope including two separate spacecraft: an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart, with the eyepiece directly behind the magnifying glass, "aiming" at an intended target; their relative orientation determines the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very thin membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high-quality images. The positions of the two spacecraft are controlled both to maintain good optical focus and to point at desired targets, which may be either earthbound or celestial.
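    A diffractive primary of this kind is typically zoned like a Fresnel zone plate, whose zone radii follow r_n = sqrt(n * lambda * f). The wavelength and focal length below are invented to show the scaling; the patent's actual dimensions are not used:

    ```python
    import numpy as np

    lam = 550e-9                  # design wavelength (m), assumed
    f = 1000.0                    # focal length (m), illustrative only
    n = np.arange(1, 1001)
    r = np.sqrt(n * lam * f)      # zone radii (m)
    dr = np.diff(r)               # zone widths shrink toward the rim
    ```

    The outermost zones are the narrowest, which is why very large diffractive apertures demand fine feature control at the membrane's edge.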

  10. Future of synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Barath, F. T.

    1978-01-01

    The present status of the applications of synthetic aperture radars (SARs) is reviewed, and the technology state of the art, as represented by the Seasat-A and SIR-A SARs, is examined. The potential of SAR applications and the near- and longer-term technology trends are assessed.

  11. SEASAT Synthetic Aperture Radar Data

    NASA Technical Reports Server (NTRS)

    Henderson, F. M.

    1981-01-01

    The potential of radar imagery from space altitudes is discussed and the advantages of radar over passive sensor systems are outlined. Specific reference is made to the SEASAT synthetic aperture radar. Possible applications include oil spill monitoring, snow and ice reconnaissance, mineral exploration, and monitoring phenomena in the urban environment.

  12. Impact of aperturing and pixel size on XPCS using AGIPD

    NASA Astrophysics Data System (ADS)

    Becker, J.; Graafsma, H.

    2012-02-01

    A case study for the Adaptive Gain Integrating Pixel Detector (AGIPD) at the European XFEL employing the intensity autocorrelation technique was performed using the detector simulation tool HORUS. The study compares the AGIPD (pixel size of (200 μm)²) to a possible apertured version of the detector and to a hypothetical system with 100 μm pixel size, and investigates the influence of intensity fluctuations and incoherent noise on the quality of the acquired data.

  13. Fresnel diffraction of aperture with rough edge

    NASA Astrophysics Data System (ADS)

    Cui, Yuwei; Zhang, Wei; Wang, Junhong; Zhang, Meina; Teng, Shuyun

    2015-06-01

    The Fresnel diffraction of an aperture with a rough edge is studied in this paper. Circular and elliptical apertures with sinusoidal and random edges are chosen as examples to investigate the influence of the aperture edge on the diffraction. The numerical calculation results indicate intuitively the variations of the transverse and longitudinal diffraction intensity distributions with the edge parameters of the aperture. The data files of the aperture models are obtained through numerical calculations, and the aperture samples are produced with the help of a liquid crystal light modulator (LCLM). Thus, practical experiments on the diffraction of apertures with rough edges are carried out. The measured results are consistent with the calculated ones. Approximate analytic expressions for the diffraction by the modified aperture are deduced on the basis of Fresnel diffraction theory and statistical optics, and reasonable explanations for the influence of the edge parameters on the diffraction are given through the theoretical analysis.
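    The kind of numerical calculation described can be sketched with an angular-spectrum Fresnel propagator applied to a circular aperture whose radius is modulated sinusoidally in angle. All parameters (grid, wavelength, distance, roughness amplitude) are illustrative choices, not values from the paper:

    ```python
    import numpy as np

    N, L = 512, 4e-3              # samples per side, grid width (m), assumed
    wl, z = 633e-9, 0.05          # wavelength (m), propagation distance (m), assumed
    x = np.linspace(-L / 2, L / 2, N, endpoint=False)
    X, Y = np.meshgrid(x, x)
    r, theta = np.hypot(X, Y), np.arctan2(Y, X)

    # Circular aperture with a sinusoidally rough edge: r0(theta) = a*(1 + eps*sin(m*theta))
    a, eps, m = 0.5e-3, 0.05, 12
    U0 = (r <= a * (1 + eps * np.sin(m * theta))).astype(complex)

    # Angular-spectrum (Fresnel) propagation to distance z
    fx = np.fft.fftfreq(N, d=L / N)
    FX, FY = np.meshgrid(fx, fx)
    Hker = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))
    Uz = np.fft.ifft2(np.fft.fft2(U0) * Hker)
    I = np.abs(Uz) ** 2
    ```

    Varying `eps` and `m` reproduces the qualitative dependence of the diffraction pattern on the edge parameters; the transfer function has unit modulus, so the propagated field conserves energy.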

  14. Alternative aperture stop position designs for SIRTF

    NASA Technical Reports Server (NTRS)

    Davis, Paul K.; Dinger, Ann S.

    1990-01-01

    Three designs of the Space Infrared Telescope Facility (SIRTF) for a 100,000 high earth orbit are considered with particular attention given to the evaluation of the aperture stop position. The choice of aperture stop position will be based on stray light considerations which are being studied concurrently. It is noted that there are advantages in cost, mass, and astronomical aperture to placing the aperture stop at or near the primary mirror, if the stray light circumstances allow.

  15. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
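    The solver structure described, a preconditioned Krylov subspace method embedded in an inexact Newton iteration, can be sketched on a much smaller nonlinear boundary value problem. This is not TranAir's full-potential system; it is a one-dimensional stand-in (u'' = exp(u) with zero Dirichlet boundaries) solved with SciPy's Newton-Krylov driver:

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 64
    h = 1.0 / (n + 1)

    def residual(u):
        # Discrete residual of u'' - exp(u) = 0 with u(0) = u(1) = 0
        d2 = np.empty_like(u)
        d2[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
        d2[0] = (u[1] - 2 * u[0]) / h**2        # left boundary value is 0
        d2[-1] = (u[-2] - 2 * u[-1]) / h**2     # right boundary value is 0
        return d2 - np.exp(u)

    # Inexact Newton with a Krylov (LGMRES) inner solve; no explicit Jacobian
    u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
    ```

    The appeal, as in TranAir, is that only residual evaluations are needed: the Jacobian is applied implicitly by finite differences inside the Krylov solver.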

  16. VSATs - Very small aperture terminals

    NASA Astrophysics Data System (ADS)

    Everett, John L.

    The present volume on very small aperture terminals (VSATs) discusses antennas, semiconductor devices, and traveling wave tubes and amplifiers for VSAT systems, VSAT low noise downconverters, and modems and codecs for VSAT systems. Attention is given to multiaccess protocols for VSAT networks, protocol software in Ku-band VSAT network systems, system design of VSAT data networks, and the policing of VSAT networks. Topics addressed include the PANDATA and PolyCom systems, APOLLO - a satellite-based information distribution system, data broadcasting within a satellite television channel, and the NEC NEXTAR VSAT system. Also discussed are small aperture military ground terminals, link budgets for VSAT systems, capabilities and experience of a VSAT service provider, and developments in VSAT regulation.

  17. Broadband synthetic aperture geoacoustic inversion.

    PubMed

    Tan, Bien Aik; Gerstoft, Peter; Yardim, Caglar; Hodgkiss, William S

    2013-07-01

    A typical geoacoustic inversion procedure involves powerful source transmissions received on a large-aperture receiver array. A more practical approach is to use a single moving source and/or receiver in a low signal to noise ratio (SNR) setting. This paper uses single-receiver, broadband, frequency coherent matched-field inversion and exploits coherently repeated transmissions to improve estimation of the geoacoustic parameters. The long observation time creates a synthetic aperture due to relative source-receiver motion. This approach is illustrated by studying the transmission of multiple linear frequency modulated (LFM) pulses which results in a multi-tonal comb spectrum that is Doppler sensitive. To correlate well with the measured field across a receiver trajectory and to incorporate transmission from a source trajectory, waveguide Doppler and normal mode theory is applied. The method is demonstrated with low SNR, 100-900 Hz LFM pulse data from the Shallow Water 2006 experiment. PMID:23862809
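    The multi-tonal comb spectrum mentioned arises because a coherently repeated pulse train is periodic: tiling one pulse n times leaves the DFT nonzero only on every n-th bin. A small sketch with invented parameters (the experiment's actual rates and bands are not reproduced):

    ```python
    import numpy as np

    fs = 8000.0                            # sample rate (Hz), assumed
    T, f0, f1 = 0.25, 100.0, 900.0         # pulse length (s) and LFM band (Hz), assumed
    n_samp = int(round(T * fs))
    t = np.arange(n_samp) / fs
    pulse = np.sin(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2))   # one LFM chirp

    n_rep = 8                              # coherently repeated transmissions
    train = np.tile(pulse, n_rep)
    S = np.abs(np.fft.rfft(train))
    comb = S[::n_rep]                      # the comb lines, spaced 1/T apart in frequency
    ```

    The lines at multiples of 1/T are what make the waveform Doppler sensitive: relative source-receiver motion shifts and scales the comb, which the inversion exploits.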

  18. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for atmospheric, earth resources, and oceans communities. This completed facility will also make available the opportunity to acquire simulated EOS lidar data on a near global basis. The design and construction of this unique scanning mechanism presents exciting technological challenges of maintaining the turning mirror optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  19. Application of Modern Aperture Integration (AI) and Geometrical Theory of Diffraction (GTD) Techniques for Analysis of Large Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Rudduck, R. C.

    1985-01-01

    The application of aperture integration (AI) and geometrical theory of diffraction (GTD) techniques to the analysis of large reflector antennas is outlined. The following techniques were used: computer modeling; validation of analysis and computer codes; computer-aided design modifications; limitations of the conventional aperture integration (AIC) method; the extended aperture integration (AIE) method; the AIE method for feed scattering calculations; near-field probing predictions for the 15 meter model; limitations of AIC for surface tolerance effects; the aperture integration on the surface (AIS) method; and AIC and GTD calculations for the compact range reflector.

  20. Multiple aperture imager component development

    NASA Astrophysics Data System (ADS)

    Lees, David E.; Henshaw, Philip D.

    1991-03-01

    This final report presents results of an experimental and analytical effort to develop multiple aperture imagers built from unphased, direct-detection subapertures. An object was imaged using wavelength shift instead of object motion to create multiple speckle pattern realizations. An analysis of subaperture geometry effects on the autocorrelation estimate was performed. Experimental measurements of the detector modulation transfer function were made. Finally, a new algorithm to reconstruct imagery with improved signal-to-noise ratio was developed.

  1. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted by analyzing all angles of the light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal (LC) panel, similar to ones commonly used in the display industry, to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture: a digital micromirror device (DMD) will replace the liquid crystal panel. The DMD can be qualified for harsh environments for 4D light field imaging, enabling an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected from an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can.
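    The three programmed aperture modes listed (single viewpoint, multiple viewpoints, coded aperture) amount to writing different binary masks to the mirror array. A sketch on a hypothetical DMD-sized grid; the 1024x768 mirror count and mask sizes are assumptions, not values from the text:

    ```python
    import numpy as np

    rows, cols = 768, 1024                      # assumed DMD mirror count
    yy, xx = np.mgrid[:rows, :cols]

    def circular_mask(cy, cx, radius):
        # Binary "open" pattern for the mirrors inside a circle
        return (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2

    single = circular_mask(384, 512, 60)        # on-axis single viewpoint
    shifted = circular_mask(384, 312, 60)       # displaced viewpoint (stereo baseline)
    rng = np.random.default_rng(0)
    coded = rng.random((rows, cols)) < 0.5      # 50%-open random coded aperture
    ```

    Cycling masks like `single` and `shifted` samples the light field from different view angles, while `coded` trades a simple open pupil for a pattern that must be decoded computationally.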

  2. Coherent sub-aperture ultraviolet imagery

    NASA Astrophysics Data System (ADS)

    Morton, R. G.; Connally, W. J.; Avicola, K.; Monjo, D.; Olson, T.

    1989-09-01

    Laboratory targets have been imaged by a multi-sub-aperture, coherent receiver technique in which a common local oscillator illuminates the sub-aperture array to preserve both phase and intensity information. The target, receiver, and range dimensions were chosen such that each sub-aperture was smaller than the speckle size. Various targets were illuminated by microsecond pulses from an e-beam pumped XeF power amplifier, which was seeded by a coherent ultraviolet beam generated with a frequency-doubled visible dye laser. Data are presented showing comparisons between the coherent multi-sub-aperture approach and conventional, full-aperture photography of the same target(s).

  3. 3D synthetic aperture for controlled-source electromagnetics

    NASA Astrophysics Data System (ADS)

    Knaak, Allison

    Locating hydrocarbon reservoirs has become more challenging with smaller, deeper, or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and the depth of the reservoir. To reduce the impact of complicated settings and improve the detection capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplify the anomaly from the reservoir, lower the noise floor, and reduce noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets.
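    The core weighting idea, phase-steering the contributions of many sources so they add coherently at a target, can be sketched with a toy diffusive field. The geometry, skin depth, and point-source response below are invented for illustration and are not the thesis's full CSEM physics:

    ```python
    import numpy as np

    src_x = np.linspace(-2000.0, 2000.0, 20)           # inline source positions (m), assumed
    target = np.array([500.0, 0.0, 1500.0])            # target location (m), assumed

    delta = 300.0                                      # skin depth (m), assumed
    k = (1 + 1j) / delta                               # diffusive wavenumber
    r = np.sqrt((src_x - target[0])**2 + target[1]**2 + target[2]**2)
    resp = np.exp(1j * k * r) / r                      # toy point-source responses at the target

    naive = np.abs(resp.sum())                         # unweighted synthetic aperture
    w = np.exp(-1j * np.real(k) * r)                   # steering weights: cancel each phase delay
    steered = np.abs((w * resp).sum())                 # steered synthetic aperture
    ```

    With the phase delays cancelled, the weighted sum equals the sum of the response magnitudes, so the steered aperture can never do worse than the unweighted one at the steering point.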

  4. Controlled-aperture wave-equation migration

    SciTech Connect

    Huang, L.; Fehler, Michael C.; Sun, H.; Li, Z.

    2003-01-01

    We present a controlled-aperture wave-equation migration method that not only can reduce migration artifacts due to limited recording apertures and determine image weights to balance the effects of limited-aperture illumination, but also can improve the migration accuracy by reducing the slowness perturbations within the controlled migration regions. The method consists of two steps: a migration aperture scan and a controlled-aperture migration. Migration apertures for a sparse distribution of shots are determined using wave-equation migration, and those for the other shots are obtained by interpolation. During the final controlled-aperture migration step, we can select a reference slowness in controlled regions of the slowness model to reduce slowness perturbations, and consequently increase the accuracy of wave-equation migration methods that make use of reference slownesses. In addition, the computation in the space domain during wavefield downward continuation needs to be conducted only within the controlled apertures; therefore, the computational cost of the controlled-aperture migration step (not including the migration aperture scan) is less than that of the corresponding uncontrolled-aperture migration. Finally, we can use the efficient split-step Fourier approach for the migration-aperture scan, then use other, more accurate though more expensive, wave-equation migration methods to perform the final controlled-aperture migration to produce the most accurate image.
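    One downward-continuation step of the split-step Fourier approach mentioned above consists of a phase shift with a reference slowness in the wavenumber domain, followed by a thin-lens style correction for lateral slowness perturbations. A sketch with invented parameters (a single frequency, one depth step, zero perturbation inside the controlled region):

    ```python
    import numpy as np

    nx, dx, dz = 256, 10.0, 10.0             # traces, trace spacing (m), depth step (m), assumed
    v_ref, freq = 2000.0, 25.0               # reference velocity (m/s) and frequency (Hz), assumed
    w = 2 * np.pi * freq

    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    kz = np.sqrt(np.maximum((w / v_ref) ** 2 - kx ** 2, 0.0))   # vertical wavenumber

    x = (np.arange(nx) - nx // 2) * dx
    U = np.exp(-x**2 / (2 * 200.0**2)).astype(complex)          # monochromatic wavefield slice

    U_ref = np.fft.ifft(np.fft.fft(U) * np.exp(1j * kz * dz))   # phase-shift (reference) step
    dv = np.zeros(nx)                        # zero slowness perturbation in the controlled region
    U_down = U_ref * np.exp(1j * w * dz * (1.0 / (v_ref + dv) - 1.0 / v_ref))
    ```

    Choosing the reference slowness so that `dv` stays small inside the controlled aperture is exactly what keeps the split-step correction, and hence the migration, accurate.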

  5. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
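    The semi-parametric structure described, a shared non-parametric curve plus fixed bias parameters, can be sketched in one dimension. Here degree-1 B-splines (hat functions) stand in for the paper's multivariate B-spline basis, two synthetic "receivers" observe the same VTEC curve, and only their relative bias is estimable without a datum constraint; all numbers are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.sort(rng.uniform(0, 24, 400))                     # observation epochs (h)
    vtec = 10 + 8 * np.exp(-((t - 14) / 4) ** 2)             # shared VTEC curve (TECU), assumed
    bias2 = 3.5                                              # receiver-2 DCB (TECU), to recover
    y = np.concatenate([vtec, vtec + bias2])
    y = y + 0.1 * rng.standard_normal(y.size)                # measurement noise

    centers = np.linspace(0, 24, 13)                         # uniform knot grid, spacing 2 h
    hat = np.maximum(1 - np.abs(t[:, None] - centers) / 2.0, 0.0)   # degree-1 B-spline basis

    # Design matrix: [shared spline coefficients | receiver-2 bias]; receiver 1
    # fixes the datum, since a constant bias lies in the span of the spline basis.
    A = np.block([[hat, np.zeros((t.size, 1))],
                  [hat, np.ones((t.size, 1))]])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    est_bias = coef[-1]
    ```

    The same least-squares structure, with tensor-product B-splines and many satellite-receiver pairs, is what lets the paper estimate VTEC and DCBs jointly.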

  6. Stereoscopic full aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Strocovsky, Sergio G.; Otero, Dino

    2011-06-01

    Images from planar scintigraphy and single photon emission computerized tomography (SPECT) used in nuclear medicine are often of low quality: they usually appear blurred and noisy. This problem is due to the low spatial resolution and poor sensitivity of the acquisition technique of the gamma camera (GC). Other techniques, such as coded aperture imaging (CAI), reach higher spatial resolutions than the GC. However, CAI is not frequently used for imaging in nuclear medicine, due to the decoding complexity of some images and the difficulty in controlling the noise magnitude. In sum, the images obtained with the GC are of low quality, and the CAI technique remains difficult to implement. A novel technique, full aperture imaging (FAI), also uses gamma-ray encoding to obtain images, but the coding system and the image reconstruction method are simpler than those used in CAI. In addition, FAI also reaches higher spatial resolution than the GC. In this work, the principles of the FAI technique and the image reconstruction method are explained in detail. The FAI technique is tested by means of Monte Carlo simulations with filiform and spherical sources. Spatial resolution tests of GC versus FAI were performed using two different source-detector distances. First, simulations were made without interposing any material between the sources and the detector. Then, other more realistic simulations were made, in which the sources were placed in the centre of a rectangular prismatic region filled with water. A rigorous comparison was made between GC and FAI images of the linear filiform sources by means of two methods: mean fluence profile graphs and correlation tests. Finally, the three-dimensional capability of FAI was tested with two spherical sources. The results show that the FAI technique has greater sensitivity (>100 times) and greater spatial resolution (>2.6 times) than that of the GC with an LEHR collimator, in both cases, with and without attenuating material and long and short

  7. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time-progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y^-1. A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 μSv kg Bq^-1 y^-1 for soil and 0.00596 μSv m^3 Bq^-1 y^-1 for water (assuming a 1:1 U-234:U-238 activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 μSv kg Bq^-1 y^-1 in soil and 13.0 μSv m^3 Bq^-1 y^-1 in water. PMID:19509509
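    The dose factors reported are meant to be applied by simple multiplication with site activity concentrations. A small worked example using the paper's adult/industrial uranium factors; the concentrations are hypothetical site values, not from the paper:

    ```python
    # Paper's environmental dose factors for uranium, adult industrial scenario
    f_soil = 0.00476          # uSv per (Bq/kg of U in soil) per year
    f_water = 0.00596         # uSv per (Bq/m^3 of U in water) per year

    c_soil = 40.0             # Bq/kg, hypothetical soil activity
    c_water = 10.0            # Bq/m^3, hypothetical groundwater activity

    annual_dose = f_soil * c_soil + f_water * c_water   # uSv per year
    ```

    Summing such terms over the uranium, thorium, and actinium series radionuclides gives the simple baseline TED estimate the factors were designed for.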

  8. Asymptotic modeling of synthetic aperture ladar sensor phenomenology

    NASA Astrophysics Data System (ADS)

    Neuroth, Robert M.; Rigling, Brian D.; Zelnio, Edmund G.; Watson, Edward A.; Velten, Vincent J.; Rovito, Todd V.

    2015-05-01

    Interest in the use of active electro-optical (EO) sensors for non-cooperative target identification has steadily increased as the quality and availability of EO sources and detectors have improved. A unique and recent innovation has been the development of an airborne synthetic aperture imaging capability at optical wavelengths. To effectively exploit this new data source for target identification, one must develop an understanding of target-sensor phenomenology at those wavelengths. Current high-frequency, asymptotic EM predictors are computationally intractable under such conditions, as their ray density is inversely proportional to wavelength. As a more efficient alternative, we have developed a geometric-optics-based simulation for synthetic aperture ladar that seeks to model the second-order statistics of the diffuse scattering commonly found at those wavelengths, but with a much lower ray density. Code has been developed, ported to high-performance computing environments, and tested on a variety of target models.
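
    The second-order statistics in question can be illustrated with the textbook random-phasor picture of diffuse scattering: many independent scatterers per resolution cell produce fully developed speckle whose intensity contrast is unity regardless of how many rays (phasors) are summed. A generic sketch, not the authors' simulation:

```python
import math
import random

def diffuse_return(n_scatterers=200):
    """Intensity of one ladar resolution cell modeled as a sum of
    unit-amplitude phasors with independent uniform phases."""
    re = im = 0.0
    for _ in range(n_scatterers):
        phi = random.uniform(0.0, 2.0 * math.pi)
        re += math.cos(phi)
        im += math.sin(phi)
    return (re * re + im * im) / n_scatterers   # normalized intensity

random.seed(1)
samples = [diffuse_return() for _ in range(3000)]
mean_i = sum(samples) / len(samples)
var_i = sum((s - mean_i) ** 2 for s in samples) / len(samples)
contrast = math.sqrt(var_i) / mean_i   # ~1 for fully developed speckle
```

    Because the statistics converge with relatively few phasors per cell, the ray density can be far lower than what a deterministic asymptotic EM predictor would require at optical wavelengths.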

  9. Aperture scanning Fourier ptychographic microscopy

    PubMed Central

    Ou, Xiaoze; Chung, Jaebum; Horstmeyer, Roarke; Yang, Changhuei

    2016-01-01

    Fourier ptychographic microscopy (FPM) is implemented through aperture scanning by an LCOS spatial light modulator at the back focal plane of the objective lens. This FPM configuration enables capture of the complex scattered field of a 3D sample in both transmissive and reflective modes. We further show that, by combining it with compressive sensing theory, the reconstructed 2D complex scattered field can be used to recover the 3D sample scattering density. This implementation expands the scope of application of FPM and can be beneficial for areas such as tissue imaging and wafer inspection. PMID:27570705
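
    The aperture-scanning forward model can be sketched in a few lines: each measurement applies a shifted pupil mask in the Fourier plane of the sample field and records only intensity. This is an illustrative model of the acquisition step, not the authors' reconstruction code, and the grid size and aperture radius below are arbitrary:

```python
import numpy as np

def fpm_capture(field, cx, cy, radius):
    """One aperture-scanning FPM measurement: mask the Fourier spectrum
    of the complex sample field with a circular pupil centered at
    (cx, cy), then record the intensity of the resulting image."""
    n = field.shape[0]
    F = np.fft.fftshift(np.fft.fft2(field))
    y, x = np.mgrid[:n, :n]
    mask = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F * mask))) ** 2

rng = np.random.default_rng(0)
sample = np.exp(1j * rng.uniform(0, 1, (64, 64)))   # phase-only object
img = fpm_capture(sample, 32, 32, 10)               # on-axis aperture
```

    Scanning (cx, cy) across the back focal plane yields the set of low-resolution intensity images that the FPM phase-retrieval step stitches into a high-resolution complex field.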

  10. Dual aperture multispectral Schmidt objective

    NASA Technical Reports Server (NTRS)

    Minott, P. O. (Inventor)

    1984-01-01

    A dual aperture, off-axis catadioptric Schmidt objective is described. It is formed by symmetrically aligning two pairs of Schmidt objectives on opposite sides of a common plane (x,z). Each objective has a spherical primary mirror with a spherical focal plane and center of curvature aligned along an optic axis laterally spaced apart from the common plane. A multiprism beamsplitter with buried dichroic layers and convex entrance and concave exit surfaces optically concentric to the center of curvature may be positioned at the focal plane. The primary mirrors of each objective may be connected rigidly together and may have equal or unequal focal lengths.

  11. The Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the project's progress and its first operation at high altitude, a search for bursts in six months of preliminary data, as well as a search for a signal at ground level when satellites report a burst.

  12. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture terminal (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  13. Resonant Effects in Nanoscale Bowtie Apertures

    NASA Astrophysics Data System (ADS)

    Ding, Li; Qin, Jin; Guo, Songpo; Liu, Tao; Kinzel, Edward; Wang, Liang

    2016-06-01

    Nanoscale bowtie aperture antennas can be used to focus light well below the diffraction limit with extremely high transmission efficiencies. This paper studies the spectral dependence of the transmission through nanoscale bowtie apertures defined in a silver film. A realistic bowtie aperture is numerically modeled using the Finite Difference Time Domain (FDTD) method. Results show that the transmission spectrum is dominated by Fabry-Pérot (F-P) waveguide modes and plasmonic modes. The F-P resonance is sensitive to the thickness of the film and the plasmonic resonant mode is closely related to the gap distance of the bowtie aperture. Both characteristics significantly affect the transmission spectrum. To verify these numerical results, bowtie apertures are FIB milled in a silver film. Experimental transmission measurements agree with simulation data. Based on this result, nanoscale bowtie apertures can be optimized to realize deep sub-wavelength confinement with high transmission efficiency with applications to nanolithography, data storage, and bio-chemical sensing.

  14. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that allows the computation of local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  15. DAVINCI a Dilute Aperture Coronagraph

    NASA Astrophysics Data System (ADS)

    Shao, Michael

    2009-01-01

    The motivation for DAVINCI was originally to make use of the technology developed for space interferometers such as SIM to build a coronagraph from four 1.1 m telescopes that was dramatically lower in cost than a 4-5 m filled-aperture off-axis coronagraph. Our initial studies through Team X have shown this cost savings to be real. A more careful analysis showed that DAVINCI would have an inner working angle of 35 mas, a factor of 2 smaller than a 2 lambda/D 4 m coronagraph or a 70 m external occulter, resulting in a 10X increase in the number of potential Earth-clone targets. DAVINCI uses a nulling interferometer as a coronagraph; a nulling interferometer is one of the few coronagraph architectures compatible with segmented and dilute-aperture telescopes. Combined with a post-coronagraph wavefront sensor, several ultra-demanding tolerances of conventional coronagraphs can be relaxed by factors of 100. The post-coronagraph wavefront sensor is also much less affected by local zodiacal and exozodiacal background than wavefront sensors that use the science camera as the wavefront sensor. The post-coronagraph interferometer is also used on ground-based extreme-AO coronagraphs such as GPI and P1640.
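
    The quoted inner working angle can be sanity-checked with the standard k·λ/D relation. A minimal sketch; the 700 nm observing wavelength is an assumption for illustration and is not stated in the abstract:

```python
import math

RAD_TO_MAS = 180.0 / math.pi * 3600.0 * 1000.0   # radians -> milliarcseconds

def iwa_mas(wavelength_m, diameter_m, k=2.0):
    """Inner working angle k * lambda / D, in milliarcseconds."""
    return k * wavelength_m / diameter_m * RAD_TO_MAS

# Assumed wavelength of 700 nm for a 2*lambda/D, 4 m coronagraph:
conventional = iwa_mas(700e-9, 4.0)   # ~72 mas, consistent with the
                                      # "70 m occulter / 2x larger" claim
```

    Halving this value lands near the 35 mas figure quoted for DAVINCI, which is where the 10X gain in accessible Earth-clone targets comes from.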

  16. Synthetic aperture sonar image statistics

    NASA Astrophysics Data System (ADS)

    Johnson, Shawn F.

    Synthetic Aperture Sonar (SAS) systems are capable of producing photograph-quality seafloor imagery using a lower frequency than other systems of comparable resolution. However, as with other high-resolution sonar systems, SAS imagery is often characterized by heavy-tailed amplitude distributions which may adversely affect target detection systems. The constant cross-range resolution with respect to range that results from the synthetic aperture formation process provides a unique opportunity to improve our understanding of system and environment interactions, which is essential for accurate performance prediction. This research focused on the impact of multipath contamination and the impact of resolution on image statistics, accomplished through analyses of data collected during at-sea experiments, analytical modeling, and development of numerical simulations. Multipath contamination was shown to have an appreciable impact on image statistics at ranges greater than the water depth and when the levels of the contributing multipath are within 10 dB of the direct path, reducing the image amplitude distribution tails while also degrading image clarity. Image statistics were shown to depend strongly upon both system resolution and orientation to seafloor features such as sand ripples. This work contributes to improving detection systems by aiding understanding of the influences of background (i.e. non-target) image statistics.
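
    The "heavy-tailed amplitude distribution" contrast with ordinary Rayleigh speckle can be illustrated with the standard compound (texture-modulated) model often used for sonar and radar clutter: a gamma-distributed texture multiplying Rayleigh speckle yields K-distributed amplitudes. This is a generic sketch, not the dissertation's model, and the shape parameter is arbitrary:

```python
import math
import random

def amplitudes(n, shape=None, seed=0):
    """Draw n amplitude samples: pure Rayleigh speckle if shape is None,
    otherwise K-distributed (gamma texture x Rayleigh speckle)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        t = 1.0 if shape is None else rng.gammavariate(shape, 1.0 / shape)
        out.append(math.sqrt(-t * math.log(1.0 - rng.random())))
    return out

def norm_intensity_moment(amps):
    """E[I^2] / E[I]^2: equals 2 for Rayleigh, larger for heavy tails."""
    i1 = sum(a * a for a in amps) / len(amps)
    i2 = sum((a * a) ** 2 for a in amps) / len(amps)
    return i2 / (i1 * i1)

rayleigh = norm_intensity_moment(amplitudes(20000))
k_dist = norm_intensity_moment(amplitudes(20000, shape=1.5))
```

    Detection thresholds set under a Rayleigh assumption produce excess false alarms when the true statistics look like the second case, which is why characterizing the background distribution matters for performance prediction.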

  17. Compressible Astrophysics Simulation Code

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  18. Diffraction smoothing aperture for an optical beam

    DOEpatents

    Judd, O'Dean P.; Suydam, Bergen R.

    1976-01-01

    The disclosure is directed to an aperture for an optical beam having an irregular periphery or having perturbations imposed upon the periphery to decrease the diffraction effect caused by the beam passing through the aperture. Such apertures are particularly useful with high power solid state laser systems in that they minimize the problem of self-focusing which frequently destroys expensive components in such systems.

  19. Particle-in-Cell Modeling of Magnetized Argon Plasma Flow Through Small Mechanical Apertures

    SciTech Connect

    Adam B. Sefkow and Samuel A. Cohen

    2009-04-09

    Motivated by observations of supersonic argon-ion flow generated by linear helicon-heated plasma devices, a three-dimensional particle-in-cell (PIC) code is used to study whether stationary electrostatic layers form near mechanical apertures intersecting the flow of magnetized plasma. By self-consistently evaluating the temporal evolution of the plasma in the vicinity of the aperture, the PIC simulations characterize the roles of the imposed aperture and applied magnetic field on ion acceleration. The PIC model includes ionization of a background neutral-argon population by thermal and superthermal electrons, the latter found upstream of the aperture. Near the aperture, a transition from a collisional to a collisionless regime occurs. Perturbations of density and potential, with mm wavelengths and consistent with ion acoustic waves, propagate axially. An ion acceleration region of length ~ 200-300 λD,e forms at the location of the aperture and is found to be an electrostatic double layer, with axially-separated regions of net positive and negative charge. Reducing the aperture diameter or increasing its length increases the double layer strength.
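
    The quoted double-layer extent of ~200-300 electron Debye lengths can be put in physical units with the standard Debye-length formula. The plasma parameters below are representative values assumed for illustration, not taken from the paper:

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
QE = 1.602e-19     # elementary charge, C

def debye_length_m(te_ev, ne_m3):
    """Electron Debye length sqrt(eps0 * kTe / (ne * e^2)) with the
    temperature already expressed in eV (so one factor of e cancels)."""
    return math.sqrt(EPS0 * te_ev / (ne_m3 * QE))

# Assumed helicon-plasma parameters: Te = 5 eV, ne = 1e18 m^-3
lam_d = debye_length_m(5.0, 1e18)      # ~17 micrometres
layer = (200 * lam_d, 300 * lam_d)     # double-layer extent, a few mm
```

    At these assumed parameters the 200-300 λD,e acceleration region is a few millimetres long, i.e. comparable to a small mechanical aperture, which is consistent with the layer forming at the aperture location.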

  20. Ion mobility spectrometer with virtual aperture grid

    DOEpatents

    Pfeifer, Kent B.; Rumpf, Arthur N.

    2010-11-23

    An ion mobility spectrometer does not require a physical aperture grid to prevent premature ion detector response. The last electrodes adjacent to the ion collector (typically the last four or five) have an electrode pitch that is less than the width of the ion swarm and each of the adjacent electrodes is connected to a source of free charge, thereby providing a virtual aperture grid at the end of the drift region that shields the ion collector from the mirror current of the approaching ion swarm. The virtual aperture grid is less complex in assembly and function and is less sensitive to vibrations than the physical aperture grid.

  1. Simultaneous displacement and slope measurement in electronic speckle pattern interferometry using adjustable aperture multiplexing.

    PubMed

    Lu, Min; Wang, Shengjia; Aulbach, Laura; Koch, Alexander W

    2016-08-01

    This paper suggests the use of adjustable aperture multiplexing (AAM), a method able to introduce multiple tunable carrier frequencies into a three-beam electronic speckle pattern interferometer to measure the out-of-plane displacement and its first-order derivative simultaneously. In the optical arrangement, two single apertures are located in the object and reference light paths, respectively. In cooperation with two adjustable mirrors, virtual images of the single apertures construct three pairs of virtual double apertures with variable aperture opening sizes and aperture distances. By setting the aperture parameters properly, three tunable spatial carrier frequencies are produced within the speckle pattern and completely separate the information of the three interferograms in the frequency domain. By applying the inverse Fourier transform to a selected spectrum, its corresponding phase difference distribution can be evaluated. Therefore, we can obtain the phase map due to the deformation, as well as its slope, of the test surface from two speckle patterns recorded at different loading events. In this way, simultaneous and dynamic measurements are realized. AAM greatly simplifies the measurement system, which contributes to improving system stability and increasing system flexibility and adaptability to various measurement requirements. This paper presents the AAM working principle, the phase retrieval using spatial carrier frequency, and preliminary experimental results. PMID:27505365

  2. Multiple-Aperture Based Solar Seeing Profiler

    NASA Astrophysics Data System (ADS)

    Zhao, Gang; Ren, Deqing

    2015-08-01

    Characterization of the daytime atmospheric turbulence profile up to 30 km above the telescope is crucial for the design and performance estimation of solar Multi-Conjugate Adaptive Optics (MCAO) systems. To measure seeing profiles up to 30 km, we introduce the Multiple Aperture Seeing Profiler (MASP). It is based on the principle of S-DIMM+ and consists of two portable small telescopes similar to SHABAR. The MASP thus takes advantage of both S-DIMM+ and SHABAR: it is portable and can be used without a large telescope, while retaining the ability to measure the turbulence profile up to 30 km. Numerical simulations were carried out to evaluate the performance of the MASP. We find that for the one-layer case, the MASP can retrieve the seeing with an error of ~5% using 800 frames of WFS data, quite similar to the results of a telescope with a diameter of 1120 mm. We also simulated profiles with four turbulence layers and found that the MASP can retrieve the strengths and heights of the four layers well. Since previous measurements at BBSO showed that the daytime turbulence profile typically consists of four layers, the MASP we introduce is sufficient for actual seeing measurement.

  3. Modal wavefront reconstruction over general shaped aperture by numerical orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Jingfei; Li, Xinhua; Gao, Zhishan; Wang, Shuai; Sun, Wenqing; Wang, Wei; Yuan, Qun

    2015-03-01

    In practical optical measurements, the wavefront data are recorded by pixelated imaging sensors. The closed-form analytical base polynomial loses its orthogonality over the discrete wavefront data. For a wavefront with an irregularly shaped aperture, the corresponding analytical base polynomials are laboriously derived. The use of numerical orthogonal polynomials for reconstructing a wavefront with a generally shaped aperture over the discrete data points is presented. Numerical polynomials are orthogonal over the discrete data points regardless of the boundary shape of the aperture. The performance of numerical orthogonal polynomials is confirmed by theoretical analysis and experiments. The results demonstrate the adaptability, validity, and accuracy of numerical orthogonal polynomials for estimating the wavefront over a generally shaped aperture, from regular to irregular boundaries.
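
    One standard way to build polynomials that are orthogonal over the exact measured sample points, whatever the aperture shape, is to orthogonalize a Vandermonde-like monomial matrix numerically (here via QR decomposition; the paper's exact orthogonalization scheme may differ). A self-contained sketch with a hypothetical irregular aperture and test wavefront:

```python
import numpy as np

def fit_wavefront(x, y, w, degree=3):
    """Fit wavefront samples w at points (x, y) inside an arbitrarily
    shaped aperture using polynomials orthonormalized numerically over
    those exact sample points (QR in place of an analytical basis)."""
    cols = [x**i * y**j for n in range(degree + 1)
            for i in range(n + 1) for j in [n - i]]
    V = np.column_stack(cols)       # Vandermonde-like design matrix
    Q, R = np.linalg.qr(V)          # Q columns: discretely orthonormal basis
    coeffs = Q.T @ w                # projection is now a simple inner product
    return Q @ coeffs               # reconstructed wavefront at the samples

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, (400, 2))
inside = (pts[:, 0]**2 + pts[:, 1]**2 < 1) & (pts[:, 0] > -0.3)  # irregular
x, y = pts[inside, 0], pts[inside, 1]
w = 0.5 * x**2 + 0.2 * x * y - 0.1 * y     # smooth test wavefront
rec = fit_wavefront(x, y, w)
```

    Because the basis is orthonormal over the data points themselves, the fit coefficients decouple; no analytical derivation specific to the boundary shape is needed.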

  4. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled, and their interplay influences the physical and chemical properties of the gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds, we present the probability distribution function of the gas, the associated temperature per density bin, and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations, since no communication is needed between CPUs when using a fully threaded tree. It is thus well suited to parallel computing. We show that the screening of far-UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect. We
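
    The cost argument behind tree-based column densities can be caricatured in one dimension: distant gas is accumulated in progressively larger blocks whose mean density is assumed precomputed on the tree nodes, so a ray touches O(log N) nodes instead of N cells. This is an illustrative sketch only, not the RAMSES implementation:

```python
def column_density(rho, dx):
    """Accumulate column density along a ray, doubling the block size
    with distance. Each block stands in for one tree node whose mean
    density would be precomputed; the ray then visits ~log2(N) nodes."""
    n_col, nodes, i, step = 0.0, 0, 0, 1
    while i < len(rho):
        chunk = rho[i:i + step]
        n_col += (sum(chunk) / len(chunk)) * len(chunk) * dx  # node mean x path
        nodes += 1
        i += step
        step *= 2          # coarser nodes farther from the cell
    return n_col, nodes

# Hypothetical smoothly decreasing density profile along one ray:
rho = [1.0 / (1 + k) for k in range(1024)]
n_col, nodes = column_density(rho, dx=0.1)   # 11 node visits for 1024 cells
```

    In 3D the same idea is applied per direction, and the accuracy loss quoted in the abstract comes from using node-averaged densities for the distant gas.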

  5. Multifocal interferometric synthetic aperture microscopy

    PubMed Central

    Xu, Yang; Chng, Xiong Kai Benjamin; Adie, Steven G.; Boppart, Stephen A.; Scott Carney, P.

    2014-01-01

    There is an inherent trade-off between transverse resolution and depth of field (DOF) in optical coherence tomography (OCT) which becomes a limiting factor for certain applications. Multifocal OCT and interferometric synthetic aperture microscopy (ISAM) each provide a distinct solution to the trade-off through modification to the experiment or via post-processing, respectively. In this paper, we have solved the inverse problem of multifocal OCT and present a general algorithm for combining multiple ISAM datasets. Multifocal ISAM (MISAM) uses a regularized combination of the resampled datasets to bring advantages of both multifocal OCT and ISAM to achieve optimal transverse resolution, extended effective DOF and improved signal-to-noise ratio. We present theory, simulation and experimental results. PMID:24977909

  6. Multifocal interferometric synthetic aperture microscopy.

    PubMed

    Xu, Yang; Chng, Xiong Kai Benjamin; Adie, Steven G; Boppart, Stephen A; Carney, P Scott

    2014-06-30

    There is an inherent trade-off between transverse resolution and depth of field (DOF) in optical coherence tomography (OCT) which becomes a limiting factor for certain applications. Multifocal OCT and interferometric synthetic aperture microscopy (ISAM) each provide a distinct solution to the trade-off through modification to the experiment or via post-processing, respectively. In this paper, we have solved the inverse problem of multifocal OCT and present a general algorithm for combining multiple ISAM datasets. Multifocal ISAM (MISAM) uses a regularized combination of the resampled datasets to bring advantages of both multifocal OCT and ISAM to achieve optimal transverse resolution, extended effective DOF and improved signal-to-noise ratio. We present theory, simulation and experimental results. PMID:24977909

  7. Large aperture Fresnel telescopes/011

    SciTech Connect

    Hyde, R.A., LLNL

    1998-07-16

    At Livermore we've spent the last two years examining an alternative approach towards very large aperture (VLA) telescopes, one based upon transmissive Fresnel lenses rather than on mirrors. Fresnel lenses are attractive for VLA telescopes because they are launchable (lightweight, packageable, and deployable) and because they virtually eliminate the traditional, very tight, surface-shape requirements faced by reflecting telescopes. Their (potentially severe) optical drawback, a very narrow spectral bandwidth, can be eliminated by use of a second (much smaller) chromatically-correcting Fresnel element. This enables Fresnel VLA telescopes to provide either single-band (Δλ/λ ≈ 0.1), multiple-band, or continuous spectral coverage. Building and fielding such large Fresnel lenses will present a significant challenge, but one which appears, with effort, to be solvable.

  8. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003)]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008)]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  9. Speckle reduction in synthetic-aperture-radar imagery.

    PubMed

    Harvey, E R; April, G V

    1990-07-01

    Speckle appearing in synthetic-aperture-radar images degrades the information contained in these images. Speckle noise can be suppressed by adaptive local processing techniques, which permit the definition of statistical parameters inside a small window centered on each pixel of the image. Two processing algorithms are examined; the first uses the intensity as the variable, and the second works on a homomorphic transformation of the image intensity. A statistical model for speckle noise that takes into account correlation in multilook imagery has been used to develop these processing algorithms. Several experimental results of processed Seasat-A synthetic-aperture-radar images are discussed. PMID:19768064
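
    The intensity-domain, local-window approach described can be sketched with a generic Lee-type filter: each pixel is shrunk toward its local window mean by a gain that depends on how much the local variance exceeds the expected multiplicative speckle variance. This is a standard local-statistics filter for illustration, not the paper's exact algorithm:

```python
import numpy as np

def lee_filter(img, win=5, looks=1.0):
    """Local-statistics speckle filter on intensity data: output is
    mean + k * (pixel - mean), with gain k driven by local variance.
    For single-look intensity the speckle variation coefficient is 1."""
    cu2 = 1.0 / looks
    pad = win // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.empty_like(img, dtype=float)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            w = padded[r:r + win, c:c + win]
            m, v = w.mean(), w.var()
            k = 0.0 if v == 0 else max(0.0, (v - cu2 * m * m) / v)
            out[r, c] = m + k * (img[r, c] - m)
    return out

# Speckled "flat" scene: single-look intensity is exponentially distributed.
rng = np.random.default_rng(2)
noisy = rng.exponential(1.0, (32, 32))
smooth = lee_filter(noisy)
```

    In homogeneous regions the gain collapses to zero and the filter averages; near edges the local variance exceeds the speckle prediction, the gain rises, and detail is preserved.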

  10. A systematic review of aperture shapes

    NASA Astrophysics Data System (ADS)

    Schultz, A. B.; Frazier, T. V.

    The paper discusses the application of apodization to reflecting telescopes. The diffraction pattern of a telescope, which is the image of a star, can be changed considerably by using different aperture shapes in combination with appropriately shaped occulting masks on the optical axis. The aperture shapes studied were circular, square, and hexagonal. Polaris (α UMi) was used as the test system.

  11. ABCD matrix for apertured spherical waves.

    PubMed

    Wang, S; Bernabeu, E; Alda, J

    1991-05-01

    An ABCD matrix for describing the hard aperture under a large Fresnel number is defined in this Technical Note based on Li and Wolf's formula. It is useful for analyzing focal shifts of complicated optical systems with hard apertures. PMID:20700324

  12. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  13. Thermal emission by a subwavelength aperture

    NASA Astrophysics Data System (ADS)

    Joulain, Karl; Ezzahri, Younès; Carminati, Rémi

    2016-04-01

    We calculate, by means of fluctuational electrodynamics, the thermal emission of an aperture separating vacuum, or a material at temperature T, from the outside. We show that the thermal emission is very different depending on whether the aperture size is large or small compared to the thermal wavelength. Subwavelength apertures separating vacuum from the outside have their thermal emission strongly decreased compared to classical blackbodies, which have an aperture much larger than the wavelength. A simple expression for their emissivity can be calculated, and their total emissive power scales as T^8 instead of T^4 for large apertures. The thermal emission of a disk of material with a size comparable to the wavelength is also discussed. It is shown in particular that the emissivity of such a disk is increased when the material can support surface waves such as phonon polaritons.

  14. Micro Ring Grating Spectrometer with Adjustable Aperture

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Choi, Sang H. (Inventor)

    2012-01-01

    A spectrometer includes a micro-ring grating device having coaxially-aligned ring gratings for diffracting incident light onto a target focal point, a detection device for detecting light intensity, one or more actuators, and an adjustable aperture device defining a circular aperture. The aperture circumscribes a target focal point, and directs a light to the detection device. The aperture device is selectively adjustable using the actuators to select a portion of a frequency band for transmission to the detection device. A method of detecting intensity of a selected band of incident light includes directing incident light onto coaxially-aligned ring gratings of a micro-ring grating device, and diffracting the selected band onto a target focal point using the ring gratings. The method includes using an actuator to adjust an aperture device and pass a selected portion of the frequency band to a detection device for measuring the intensity of the selected portion.

  15. Variable aperture collimator for high energy radiation

    DOEpatents

    Hill, Ronald A.

    1984-05-22

    An apparatus is disclosed providing a variable aperture energy beam collimator. A plurality of beam opaque blocks are in sliding interface edge contact to form a variable aperture. The blocks may be offset at the apex angle to provide a non-equilateral aperture. A plurality of collimator block assemblies may be employed for providing a channel defining a collimated beam. Adjacent assemblies are inverted front-to-back with respect to one another for preventing noncollimated energy from emerging from the apparatus. An adjustment mechanism comprises a cable attached to at least one block and a hand wheel mechanism for operating the cable. The blocks are supported by guide rods engaging slide brackets on the blocks. The guide rods are pivotally connected at each end to intermediate actuators supported on rotatable shafts to change the shape of the aperture. A divergent collimated beam may be obtained by adjusting the apertures of adjacent stages to be unequal.

  16. Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix

    NASA Technical Reports Server (NTRS)

    Wiley, C. A.; Chang, M. U.

    1981-01-01

    A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiplex) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.

  17. Smov Fos/fgs Fine Alignment (small Apertures)

    NASA Astrophysics Data System (ADS)

    Kinney, Anne

    1994-01-01

    The goal is to measure the precise aperture locations and sizes. The analysis of the observations will result in database changes to the table of aperture locations. Precise aperture locations will be determined by performing a raster step and dwell sequence in the FOS apertures along the edges of the apertures. An aperture map is required at each step of the dwell sequence. This test has to be conducted for both the RED and BLUE detectors.

  18. Smov Fos/fgs Fine Alignment (small Apertures) Revitalized

    NASA Astrophysics Data System (ADS)

    Kinney, Anne

    1994-01-01

    The goal is to measure the precise aperture locations and sizes. The analysis of the observations will result in database changes to the table of aperture locations. Precise aperture locations will be determined by performing a raster step and dwell sequence in the FOS apertures along the edges of the apertures. An aperture map is required at each step of the dwell sequence. This test has to be conducted for both the RED and BLUE detectors.

  19. Synthetic aperture methods for angular scatter imaging

    NASA Astrophysics Data System (ADS)

    Guenther, Drake A.; Ranganathan, Karthik; McAllister, Michael J.; Rigby, K. W.; Walker, William F.

    2004-04-01

Angular scatter offers a new source of tissue contrast and an opportunity for tissue characterization in ultrasound imaging. We have previously described the application of the translating apertures algorithm (TAA) to coherently acquire angular scatter data over a range of scattering angles. While this approach works well at the focus, it suffers from poor depth of field (DOF) due to a finite aperture size. Furthermore, application of the TAA with large focused apertures entails a tradeoff between spatial resolution and scattering angle resolution. While large multielement apertures improve spatial resolution, they encompass many permutations of transmit/receive element pairs. This results in the simultaneous interrogation of multiple scattering angles, limiting angular resolution. We propose a synthetic aperture imaging scheme that achieves both high spatial resolution and high angular resolution. In backscatter acquisition mode, we transmit successively from single transducer elements, while receiving on the same element. Other scattering angles are interrogated by successively transmitting and receiving on different single elements chosen with the appropriate spatial separation between them. Thus any given image is formed using only transmit/receive element pairs at a single separation. This synthetic aperture approach minimizes averaging across scattering angles, and yields excellent angular resolution. Likewise, synthetic aperture methods allow us to build large effective apertures to maintain a high spatial resolution. Synthetic dynamic focusing and dynamic apodization are applied to further improve spatial resolution and DOF. We present simulation results and experimental results obtained using a GE Logiq 700MR system modified to obtain synthetic aperture TAA data. Images of wire targets exhibit high DOF and spatial resolution. We also present a novel approach for combining angular scatter data to effectively reduce grating lobes. With this approach we have

  20. Scalar wave diffraction from a circular aperture

    SciTech Connect

    Cerjan, C.

    1995-01-25

The scalar wave theory is used to evaluate the expected diffraction patterns from a circular aperture. The standard far-field Kirchhoff approximation is compared to the exact result expressed in terms of oblate spheroidal harmonics. Deviations from an expanding spherical wave are calculated as functions of the circular aperture radius and the incident beam wavelength, using suggested values for a recently proposed point diffraction interferometer. The Kirchhoff approximation is increasingly reliable in the far-field limit as the aperture radius is increased, although significant errors in amplitude and phase persist.
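The far-field Kirchhoff result for a circular aperture is the familiar Airy pattern. As a minimal numerical illustration (stdlib Python only; this is the textbook Fraunhofer limit, not the exact oblate-spheroidal expansion used in the report):

```python
import math

def bessel_j1(x):
    """First-order Bessel function via its integral representation,
    J1(x) = (1/pi) * integral_0^pi cos(tau - x*sin(tau)) dtau (trapezoid rule)."""
    n = 2000
    h = math.pi / n
    total = 0.0
    for i in range(n + 1):
        tau = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.cos(tau - x * math.sin(tau))
    return total * h / math.pi

def airy_intensity(theta, radius, wavelength):
    """Normalized Fraunhofer (far-field Kirchhoff) intensity of a circular
    aperture: I/I0 = [2 J1(u)/u]^2 with u = (2*pi/lambda) * a * sin(theta)."""
    u = 2 * math.pi / wavelength * radius * math.sin(theta)
    if abs(u) < 1e-9:
        return 1.0  # on-axis limit of 2*J1(u)/u is 1
    return (2 * bessel_j1(u) / u) ** 2
```

The first dark ring falls where u ≈ 3.832, i.e. sin(theta) ≈ 1.22 * wavelength / (2 * radius).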

  1. Three dimensional digital holographic aperture synthesis.

    PubMed

    Crouch, Stephen; Kaylor, Brant M; Barber, Zeb W; Reibel, Randy R

    2015-09-01

    Aperture synthesis techniques are applied to temporally and spatially diverse digital holograms recorded with a fast focal-plane array. Because the technique fully resolves the downrange dimension using wide-bandwidth FMCW linear-chirp waveforms, extremely high resolution three dimensional (3D) images can be obtained even at very long standoff ranges. This allows excellent 3D image formation even when targets have significant structure or discontinuities, which are typically poorly rendered with multi-baseline synthetic aperture ladar or multi-wavelength holographic aperture ladar approaches. The background for the system is described and system performance is demonstrated through both simulation and experiments. PMID:26368474

  2. Simulating aperture masking at the Large Binocular Telescope

    NASA Astrophysics Data System (ADS)

    Stürmer, Julian; Quirrenbach, Andreas

    2012-07-01

Preliminary investigations for an Aperture Masking Experiment at the Large Binocular Telescope (LBT) and its application to stellar surface imaging are presented. An algorithm is implemented which generates non-redundant aperture masks for the LBT. These masks are adapted to the special geometrical conditions at the LBT. At the same time, they are optimized to provide a uniform UV-coverage. It is also possible to favor certain baselines to adapt the UV-coverage to observational requirements. The optimization is done by selecting appropriate masks among a large number (of order 10^9) of randomized realizations of non-redundant (NR) masks. Using results of numerical simulations of the surface of red supergiants, interferometric data is generated as it would be available with these masks at the LBT while observing Betelgeuse. An image reconstruction algorithm is used to reconstruct images from squared visibility and closure phase data. It is shown that about 15 holes per mask are sufficient to retrieve detailed images. Additionally, noise is added to the data in order to simulate the influence of measurement errors, e.g., photon noise. Both the position and the shape of surface structures are hardly influenced by this noise. However, the flux of these details changes significantly.
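The randomized search over non-redundant configurations can be illustrated in one dimension (the actual masks are two-dimensional and constrained to the LBT pupil geometry; the hole count and grid size below are arbitrary):

```python
import random

def is_non_redundant(holes):
    """A mask is non-redundant if every pair of holes gives a distinct baseline."""
    baselines = [b - a for i, a in enumerate(holes) for b in holes[i + 1:]]
    return len(baselines) == len(set(baselines))

def random_nr_mask(n_holes, grid_size, trials=100000, seed=0):
    """Rejection-sample hole positions on a 1D grid until all pairwise
    baselines are unique (a tiny 1D analogue of the randomized 2D search)."""
    rng = random.Random(seed)
    for _ in range(trials):
        holes = sorted(rng.sample(range(grid_size), n_holes))
        if is_non_redundant(holes):
            return holes
    return None
```

A real implementation would additionally score each accepted mask for uniform UV-coverage and keep the best.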

  3. Noiseless Coding Of Magnetometer Signals

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1989-01-01

Report discusses application of noiseless data-compression coding to digitized readings of spaceborne magnetometers for transmission back to Earth. Objective of such coding is to increase efficiency by decreasing rate of transmission without sacrificing integrity of data. Adaptive coding compresses data by factors ranging from 2 to 6.
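The Rice-style adaptive coding this report describes pairs a simple predictor with a per-block choice of code parameter. A hedged sketch, assuming a first-difference predictor and Golomb-Rice codes (the report's exact predictor and code options may differ):

```python
def rice_encode(value, k):
    """Golomb-Rice code for a non-negative integer: unary quotient
    (value >> k) terminated by a 0, then the k low-order remainder bits."""
    q = value >> k
    bits = "1" * q + "0"
    if k:
        bits += format(value & ((1 << k) - 1), f"0{k}b")
    return bits

def best_k(block):
    """Per-block adaptation: choose the k that minimizes total coded length."""
    return min(range(8), key=lambda k: sum(len(rice_encode(v, k)) for v in block))

def encode_block(samples):
    """First-difference predictor, zigzag mapping of signed residuals to
    non-negative integers, then Rice coding with a per-block parameter k."""
    diffs = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
    mapped = [2 * d if d >= 0 else -2 * d - 1 for d in diffs]
    k = best_k(mapped)
    return k, "".join(rice_encode(v, k) for v in mapped)
```

Smooth magnetometer traces have small differences, so the coded block is shorter than the raw 8-bits-per-sample representation.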

  4. Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, Roderick Allen

    1998-04-20

A very large (10's of meters) aperture space telescope including two separate spacecraft--an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets.

  5. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Yong W.; Wiedermann, Arne H.; Ockert, Carl E.

    1985-01-01

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  6. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Y.W.; Wiedermann, A.H.; Ockert, C.E.

    1983-08-26

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  7. Synthetic Aperture Radar Missions Study Report

    NASA Technical Reports Server (NTRS)

    Bard, S.

    2000-01-01

    This report reviews the history of the LightSAR project and summarizes actions the agency can undertake to support industry-led efforts to develop an operational synthetic aperture radar (SAR) capability in the United States.

  8. Contour-Mapping Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Goldstein, R. M.; Caro, E. R.; Wu, C.

    1985-01-01

Airborne two-antenna synthetic-aperture-radar (SAR) interferometric system provides data processed to yield terrain elevation as well as reflected-intensity information. Relative altitudes of terrain points measured to within error of approximately 25 m.

  9. Eyeglass. 1. Very large aperture diffractive telescopes.

    PubMed

    Hyde, R A

    1999-07-01

The Eyeglass is a very large aperture (25-100-m) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope's large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently launchable (lightweight, packageable, and deployable) and it virtually eliminates the traditional, very tight surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope's eyepiece; the Eyeglass can provide diffraction-limited imaging with either single-band (Δλ/λ ≈ 0.1), multiband, or continuous spectral coverage. PMID:18323902
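A diffractive primary such as a Fresnel zone plate has zone radii fixed by half-wavelength path differences, which is also why its focal length scales as 1/λ (the narrow-bandwidth drawback noted above). A minimal sketch of these two textbook relations (not the Eyeglass design equations themselves):

```python
import math

def zone_radius(n, wavelength, focal_length_m):
    """Radius of the n-th Fresnel zone boundary,
    r_n = sqrt(n*lam*f + (n*lam/2)**2), chosen so that the path
    sqrt(r_n**2 + f**2) exceeds f by exactly n half-wavelengths."""
    return math.sqrt(n * wavelength * focal_length_m
                     + (n * wavelength / 2) ** 2)

def focal_length(wavelength, r1):
    """For first-zone radius r1, f ~ r1**2 / wavelength, so the focal
    length varies as 1/wavelength: strong chromatic dispersion."""
    return r1 ** 2 / wavelength
```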

  10. Large aperture ac interferometer for optical testing.

    PubMed

    Moore, D T; Murray, R; Neves, F B

    1978-12-15

    A 20-cm clear aperture modified Twyman-Green interferometer is described. The system measures phase with an AC technique called phase-lock interferometry while scanning the aperture with a dual galvanometer scanning system. Position information and phase are stored in a minicomputer with disk storage. This information is manipulated with associated software, and the wavefront deformation due to a test component is graphically displayed in perspective and contour on a CRT terminal. PMID:20208642

  11. Resonant Effects in Nanoscale Bowtie Apertures

    PubMed Central

    Ding, Li; Qin, Jin; Guo, Songpo; Liu, Tao; Kinzel, Edward; Wang, Liang

    2016-01-01

    Nanoscale bowtie aperture antennas can be used to focus light well below the diffraction limit with extremely high transmission efficiencies. This paper studies the spectral dependence of the transmission through nanoscale bowtie apertures defined in a silver film. A realistic bowtie aperture is numerically modeled using the Finite Difference Time Domain (FDTD) method. Results show that the transmission spectrum is dominated by Fabry-Pérot (F-P) waveguide modes and plasmonic modes. The F-P resonance is sensitive to the thickness of the film and the plasmonic resonant mode is closely related to the gap distance of the bowtie aperture. Both characteristics significantly affect the transmission spectrum. To verify these numerical results, bowtie apertures are FIB milled in a silver film. Experimental transmission measurements agree with simulation data. Based on this result, nanoscale bowtie apertures can be optimized to realize deep sub-wavelength confinement with high transmission efficiency with applications to nanolithography, data storage, and bio-chemical sensing. PMID:27250995

  12. Resonant Effects in Nanoscale Bowtie Apertures.

    PubMed

    Ding, Li; Qin, Jin; Guo, Songpo; Liu, Tao; Kinzel, Edward; Wang, Liang

    2016-01-01

    Nanoscale bowtie aperture antennas can be used to focus light well below the diffraction limit with extremely high transmission efficiencies. This paper studies the spectral dependence of the transmission through nanoscale bowtie apertures defined in a silver film. A realistic bowtie aperture is numerically modeled using the Finite Difference Time Domain (FDTD) method. Results show that the transmission spectrum is dominated by Fabry-Pérot (F-P) waveguide modes and plasmonic modes. The F-P resonance is sensitive to the thickness of the film and the plasmonic resonant mode is closely related to the gap distance of the bowtie aperture. Both characteristics significantly affect the transmission spectrum. To verify these numerical results, bowtie apertures are FIB milled in a silver film. Experimental transmission measurements agree with simulation data. Based on this result, nanoscale bowtie apertures can be optimized to realize deep sub-wavelength confinement with high transmission efficiency with applications to nanolithography, data storage, and bio-chemical sensing. PMID:27250995

13. Application of a geocentrifuge and stereolithographically fabricated apertures to multiphase flow in complex fracture apertures.

    SciTech Connect

    Glenn E. McCreery; Robert D. Stedtfeld; Alan T. Stadler; Daphne L. Stoner; Paul Meakin

    2005-09-01

A geotechnical centrifuge was used to investigate unsaturated multiphase fluid flow in synthetic fracture apertures under a variety of flow conditions. The geocentrifuge subjected the fluids to centrifugal forces, allowing the Bond number to be systematically changed without adjusting the fracture aperture or the fluids. The fracture models were based on the concept that surfaces generated by the fracture of brittle geomaterials have a self-affine fractal geometry. The synthetic fracture surfaces were fabricated from a transparent epoxy photopolymer using stereolithography, and fluid flow through the transparent fracture models was monitored by an optical image acquisition system. Aperture widths were chosen to be representative of the wide range of geological fractures in the vesicular basalt that lies beneath the Idaho National Laboratory (INL). Transitions between different flow regimes were observed as the acceleration was changed under constant flow conditions. The experiments showed the transition between straight and meandering rivulets in smooth-walled apertures (aperture width = 0.508 mm), the dependence of the rivulet width on acceleration in rough-walled fracture apertures (average aperture width = 0.25 mm), unstable meandering flow in rough-walled apertures at high acceleration (20g), and the narrowing of the wetted region with increasing acceleration during the penetration of water into an aperture filled with wetted particles (0.875 mm diameter glass spheres).

  14. ExoEarth Yield Estimates for a Future Large Aperture Direct Imaging Mission

    NASA Astrophysics Data System (ADS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Domagal-Goldman, Shawn; Stapelfeldt, Karl R.; Robinson, Tyler

    2015-01-01

    ExoEarth yield is a critical science metric that will constrain the required aperture of a future exoplanet-imaging mission. I will present a numerically efficient method for maximizing the yield of exoEarth candidates by simultaneously optimizing the exposure time of every star, number of visits per star, and delay time between visits, while maximally adapting the target list to the mission's capabilities. This method can potentially double the exoEarth candidate yield compared to previous methods. I will show how the yield scales with mission parameters, including aperture size and high level coronagraph parameters, and address the impact of astrophysical uncertainties on exoEarth yield.

  15. Test results of a single aperture 10 tesla dipole model magnet for the Large Hadron Collider

    SciTech Connect

    Yamamoto, Akira; Shintomi, Takakazu; Kimura, Nobuhiro

    1996-07-01

A single aperture dipole magnet has been developed with a design magnetic field of 10 tesla by using Nb-Ti/Cu conductor to be operated at 1.8 K in pressurized superfluid helium. The magnet features a double-shell coil design using highly keystoned Rutherford cable and compact non-magnetic steel collars, adaptable to a split/symmetric coil/collar design for twin-aperture dipoles. The design central magnetic field of 10 tesla has been successfully achieved in excitation at 1.95 K in pressurized superfluid helium. Test results of the magnet, with a summary of the design and fabrication, will be presented.

  16. Zinc selenide-based large aperture photo-controlled deformable mirror.

    PubMed

    Quintavalla, Martino; Bonora, Stefano; Natali, Dario; Bianco, Andrea

    2016-06-01

Realization of large aperture deformable mirrors with a high density of actuators is important in many applications, and photo-controlled deformable mirrors (PCDMs) represent an innovation in this field. Herein we show that PCDMs are scalable by realizing a 2-inch aperture device based on polycrystalline zinc selenide (ZnSe) as the photoconductive substrate and a thin polymeric reflective membrane. The ZnSe is electrically characterized and analyzed through a model that we previously introduced. The PCDM is then optically tested, demonstrating its capabilities in adaptive optics. PMID:27244417

  17. Research of aluminium alloy aerospace structure aperture measurement based on 3D digital speckle correlation method

    NASA Astrophysics Data System (ADS)

    Bai, Lu; Wang, Hongbo; Zhou, Jiangfan; Yang, Rong; Zhang, Hui

    2014-11-01

In this paper, the aperture change of an aluminium alloy aerospace structure under real load is researched. Static experiments were carried out to simulate the load environment of the flight course. Compared with traditional methods, the experimental results prove that the 3D digital speckle correlation method has good adaptability and precision for measuring aperture change, and that it can provide non-contact, real-time measurement of 3D deformation and stress concentration. The test results of the new method are compared with those of the traditional method.

  18. Clinical coding. Code breakers.

    PubMed

    Mathieson, Steve

    2005-02-24

    --The advent of payment by results has seen the role of the clinical coder pushed to the fore in England. --Examinations for a clinical coding qualification began in 1999. In 2004, approximately 200 people took the qualification. --Trusts are attracting people to the role by offering training from scratch or through modern apprenticeships. PMID:15768716

  19. CXTFIT/Excel-A modular adaptable code for parameter estimation, sensitivity analysis and uncertainty analysis for laboratory or field tracer experiments

    NASA Astrophysics Data System (ADS)

    Tang, Guoping; Mayes, Melanie A.; Parker, Jack C.; Jardine, Philip M.

    2010-09-01

    We implemented the widely used CXTFIT code in Excel to provide flexibility and added sensitivity and uncertainty analysis functions to improve transport parameter estimation and to facilitate model discrimination for multi-tracer experiments on structured soils. Analytical solutions for one-dimensional equilibrium and nonequilibrium convection dispersion equations were coded as VBA functions so that they could be used as ordinary math functions in Excel for forward predictions. Macros with user-friendly interfaces were developed for optimization, sensitivity analysis, uncertainty analysis, error propagation, response surface calculation, and Monte Carlo analysis. As a result, any parameter with transformations (e.g., dimensionless, log-transformed, species-dependent reactions, etc.) could be estimated with uncertainty and sensitivity quantification for multiple tracer data at multiple locations and times. Prior information and observation errors could be incorporated into the weighted nonlinear least squares method with a penalty function. Users are able to change selected parameter values and view the results via embedded graphics, resulting in a flexible tool applicable to modeling transport processes and to teaching students about parameter estimation. The code was verified by comparing to a number of benchmarks with CXTFIT 2.0. It was applied to improve parameter estimation for four typical tracer experiment data sets in the literature using multi-model evaluation and comparison. Additional examples were included to illustrate the flexibilities and advantages of CXTFIT/Excel. The VBA macros were designed for general purpose and could be used for any parameter estimation/model calibration when the forward solution is implemented in Excel. A step-by-step tutorial, example Excel files and the code are provided as supplemental material.
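The equilibrium convection-dispersion equation mentioned above admits closed-form solutions of the kind CXTFIT/Excel codes as spreadsheet functions. A sketch in Python rather than VBA, assuming the Ogata-Banks solution for a continuous unit-concentration inlet (whether CXTFIT uses exactly this boundary condition is an assumption here):

```python
import math

def cde_conc(x, t, v, D):
    """Ogata-Banks solution of the 1D equilibrium convection-dispersion
    equation, C/C0 = 0.5*[erfc((x - v t)/(2 sqrt(D t)))
    + exp(v x / D) * erfc((x + v t)/(2 sqrt(D t)))]."""
    s = 2.0 * math.sqrt(D * t)
    a = math.erfc((x - v * t) / s)
    try:
        # exp() can overflow for large v*x/D; the product term -> 0 there
        b = math.exp(v * x / D) * math.erfc((x + v * t) / s)
    except OverflowError:
        b = 0.0
    return 0.5 * (a + b)
```

Forward predictions like this are what the optimization and Monte Carlo macros repeatedly evaluate during parameter estimation.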

  20. Adapting hierarchical bidirectional inter prediction on a GPU-based platform for 2D and 3D H.264 video coding

    NASA Astrophysics Data System (ADS)

    Rodríguez-Sánchez, Rafael; Martínez, José Luis; Cock, Jan De; Fernández-Escribano, Gerardo; Pieters, Bart; Sánchez, José L.; Claver, José M.; de Walle, Rik Van

    2013-12-01

    The H.264/AVC video coding standard introduces some improved tools in order to increase compression efficiency. Moreover, the multi-view extension of H.264/AVC, called H.264/MVC, adopts many of them. Among the new features, variable block-size motion estimation is one which contributes to high coding efficiency. Furthermore, it defines a different prediction structure that includes hierarchical bidirectional pictures, outperforming traditional Group of Pictures patterns in both scenarios: single-view and multi-view. However, these video coding techniques have high computational complexity. Several techniques have been proposed in the literature over the last few years which are aimed at accelerating the inter prediction process, but there are no works focusing on bidirectional prediction or hierarchical prediction. In this article, with the emergence of many-core processors or accelerators, a step forward is taken towards an implementation of an H.264/AVC and H.264/MVC inter prediction algorithm on a graphics processing unit. The results show a negligible rate distortion drop with a time reduction of up to 98% for the complete H.264/AVC encoder.

  1. Aperture effects in squid jet propulsion.

    PubMed

    Staaf, Danna J; Gilly, William F; Denny, Mark W

    2014-05-01

    Squid are the largest jet propellers in nature as adults, but as paralarvae they are some of the smallest, faced with the inherent inefficiency of jet propulsion at a low Reynolds number. In this study we describe the behavior and kinematics of locomotion in 1 mm paralarvae of Dosidicus gigas, the smallest squid yet studied. They swim with hop-and-sink behavior and can engage in fast jets by reducing the size of the mantle aperture during the contraction phase of a jetting cycle. We go on to explore the general effects of a variable mantle and funnel aperture in a theoretical model of jet propulsion scaled from the smallest (1 mm mantle length) to the largest (3 m) squid. Aperture reduction during mantle contraction increases propulsive efficiency at all squid sizes, although 1 mm squid still suffer from low efficiency (20%) because of a limited speed of contraction. Efficiency increases to a peak of 40% for 1 cm squid, then slowly declines. Squid larger than 6 cm must either reduce contraction speed or increase aperture size to maintain stress within maximal muscle tolerance. Ecological pressure to maintain maximum velocity may lead them to increase aperture size, which reduces efficiency. This effect might be ameliorated by nonaxial flow during the refill phase of the cycle. Our model's predictions highlight areas for future empirical work, and emphasize the existence of complex behavioral options for maximizing efficiency at both very small and large sizes. PMID:24501132
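Two quantities in this abstract are easy to make concrete: the Reynolds number that penalizes 1 mm paralarvae, and the thrust-efficiency trade-off of jet speed. The steady-jet Froude formula below is a deliberate simplification (valid only when the jet is faster than the swimmer) and does not reproduce the pulsed-jet aperture benefit reported in the paper:

```python
import math

def reynolds(speed, length, kinematic_viscosity=1e-6):
    """Re = U * L / nu; ~1e-6 m^2/s is a typical value for seawater."""
    return speed * length / kinematic_viscosity

def jet_efficiency(swim_speed, volume_flux, aperture_radius):
    """Steady-jet (Froude) efficiency eta = 2 U / (U + u_jet), with the jet
    speed from continuity: u_jet = Q / (pi r^2). Smaller apertures give
    faster jets and more thrust but, in this steady model, lower eta."""
    u_jet = volume_flux / (math.pi * aperture_radius ** 2)
    return 2.0 * swim_speed / (swim_speed + u_jet)
```

A 1 mm animal at 5 cm/s sits near Re ~ 50, the awkward intermediate regime the paper highlights.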

  2. Bi-metal coated aperture SNOM probes

    NASA Astrophysics Data System (ADS)

    Antosiewicz, Tomasz J.; Wróbel, Piotr; Szoplik, Tomasz

    2011-05-01

    Aperture probes of scanning near-field optical microscopes (SNOM) offer resolution which is limited by a sum of the aperture diameter at the tip of a tapered waveguide probe and twice the skin depth in metal used for coating. An increase of resolution requires a decrease of the aperture diameter. However, due to low energy throughput of such probes aperture diameters usually are larger than 50 nm. A groove structure at fiber core-metal coating interface for photon-to-plasmon conversion enhances the energy throughput 5-fold for Al coated probes and 30-fold for Au coated probes due to lower losses in the metal. However, gold coated probes have lower resolution, first due to light coupling from the core to plasmons at the outside of the metal coating, and second due to the skin depth being larger than for Al. Here we report on the impact of a metal bilayer of constant thickness for coating aperture SNOM probes. The purpose of the bilayer of two metals of which the outer one is aluminum and the inner is a noble metal is to assure low losses, hence larger transmission. Using body-of-revolution finite-difference time-domain simulations we analyze properties of probes without corrugations to measure the impact of using a metal bilayer and choose an optimum bi-metal configuration. Additionally we investigate how this type of metalization works in the case of grooved probes.

  3. Testing the large aperture optical components by the sub-aperture stitching interferometer

    NASA Astrophysics Data System (ADS)

    He, Yong; Wang, Zhao-xuan; Wang, Qing; Ji, Bo

    2008-03-01

Nowadays many large aperture optical components are widely used in high-tech areas, so how to test them becomes more and more important. This paper describes a new method for testing large aperture optical components with a small aperture interferometer, deduces in a systematic way how to obtain the number of sub-apertures and the concrete stitching parameters, and finally gives the best plan for choosing the sub-apertures of square and circular plane optics. To verify the stability of the method, an experiment was performed; the results show that the stitching accuracy can reach λ/10, which meets the needs of inertial confinement fusion optics and is good enough for use in high-tech areas.
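The core of sub-aperture stitching is aligning overlapping measurements before joining them. A minimal 1D, piston-only sketch (real stitching also fits tilt, and sometimes power, over 2D overlap regions):

```python
def stitch(profile_a, profile_b, overlap):
    """Stitch two 1D height profiles that share `overlap` samples:
    estimate the piston (mean offset) in the overlap region by least
    squares, remove it from the second profile, then join."""
    piston = sum(b - a for a, b in zip(profile_a[-overlap:],
                                       profile_b[:overlap])) / overlap
    shifted = [b - piston for b in profile_b]
    return profile_a + shifted[overlap:]
```

Chaining this pairwise over a grid of sub-apertures recovers the full-aperture surface map.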

  4. Task 3: PNNL Visit by JAEA Researchers to Participate in TODAM Code Applications to Fukushima Rivers and to Evaluate the Feasibility of Adaptation of FLESCOT Code to Simulate Radionuclide Transport in the Pacific Ocean Coastal Water Around Fukushima

    SciTech Connect

    Onishi, Yasuo

    2013-03-29

Four JAEA researchers visited PNNL for two weeks in February, 2013 to learn the PNNL-developed, unsteady, one-dimensional, river model, TODAM and the PNNL-developed, time-dependent, three dimensional, coastal water model, FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and transport-deposition-resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate • ¹³⁴Cs and ¹³⁷Cs migration in Fukushima rivers and the coastal water, and • their accumulation in the river and ocean bed along the Fukushima coast. Forecasting the future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish and other aquatic biota. PNNL presented the following during the JAEA visit to PNNL: • TODAM and FLESCOT's theories and mathematical formulations • TODAM and FLESCOT model structures • Past TODAM and FLESCOT applications • Demonstrating these two codes' capabilities by applying them to simple hypothetical river and coastal water cases. • Initial application of TODAM to the Ukedo River in Fukushima and JAEA researchers' participation in its modeling. PNNL also presented topics relevant to Fukushima environmental assessment and remediation, including • PNNL molecular modeling and EMSL computer facilities • Cesium adsorption/desorption characteristics • Experiences of connecting molecular science research results to macro model applications to the environment • EMSL tour • Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.

  5. Approaching real-time terahertz imaging using photo-induced reconfigurable aperture arrays

    NASA Astrophysics Data System (ADS)

    Shams, Md. Itrat Bin; Jiang, Zhenguo; Rahman, Syed; Qayyum, Jubaid; Hesler, Jeffrey L.; Cheng, Li-Jing; Xing, Huili Grace; Fay, Patrick; Liu, Lei

    2014-05-01

We report a technique using photo-induced coded-aperture arrays for potential real-time THz imaging at room temperature. The coded apertures (based on Hadamard coding) were implemented using programmable illumination on a semi-insulating silicon wafer by a commercial digital-light processing (DLP) projector. Initial imaging experiments were performed in the 500-750 GHz band using a WR-1.5 vector network analyzer (VNA) as the source and receiver. Over the entire band, each array pixel can be optically turned on and off with an average modulation depth of ~20 dB and ~35 dB, for ~4 cm2 and ~0.5 cm2 imaging areas respectively. The modulation speed is ~1.3 kHz using the current DLP system and data acquisition software. Prototype imaging demonstrations have shown that a 256-pixel image can be obtained on the order of 10 seconds using compressed sensing (CS), and this speed can be improved greatly for potential real-time or video-rate THz imaging. This photo-induced coded-aperture imaging (PI-CAI) technique has been successfully applied to characterize THz beams in quasi-optical systems and THz horn antennas.
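Hadamard-coded measurement and its linear inversion can be sketched directly. The sketch below uses ±1 Sylvester patterns and a full inverse for clarity; practical masks are 0/1 S-matrix variants, and the experiment above reconstructed from fewer measurements via compressed sensing:

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def measure(H, scene):
    """One detector reading per mask pattern: y_i = sum_j H[i][j] * x_j."""
    return [sum(h * x for h, x in zip(row, scene)) for row in H]

def reconstruct(H, y):
    """H satisfies H * H^T = n * I, so x = (1/n) * H^T * y."""
    n = len(H)
    return [sum(H[i][j] * y[i] for i in range(n)) / n for j in range(n)]
```

Multiplexing all pixels into every reading is what gives coded-aperture systems their signal-to-noise advantage over raster scanning.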

  6. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,λ) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. The CACTI camera's ability to embed video volumes into images leads to exploration
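The CACTI forward model compresses a T-frame video into one snapshot by modulating each frame with a translated copy of the coded aperture. A toy sketch, with cyclic row shifts standing in for the physical mask translation:

```python
def shift_rows(mask, s):
    """Cyclically shift a 2D binary mask down by s rows."""
    n = len(mask)
    return [mask[(i - s) % n] for i in range(n)]

def cacti_measure(frames, mask, shifts):
    """Snapshot measurement: y = sum_t C_t * f_t, where C_t is the coded
    aperture at its position for frame t (elementwise product, then sum)."""
    n, m = len(frames[0]), len(frames[0][0])
    snap = [[0.0] * m for _ in range(n)]
    for frame, s in zip(frames, shifts):
        code = shift_rows(mask, s)
        for i in range(n):
            for j in range(m):
                snap[i][j] += code[i][j] * frame[i][j]
    return snap
```

Reconstruction then inverts this underdetermined map with a sparsity prior; the sketch covers only the measurement side.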

  7. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding, and code-excited linear prediction, yielding speech of high quality but requires 600 million multiplications and additions per second at encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique bridges gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  8. Comparison of binocular through-focus visual acuity with monovision and a small aperture inlay.

    PubMed

    Schwarz, Christina; Manzanera, Silvestre; Prieto, Pedro M; Fernández, Enrique J; Artal, Pablo

    2014-10-01

    Corneal small aperture inlays provide extended depth of focus as a solution to presbyopia. As this procedure is becoming more popular, it is interesting to compare its performance with traditional approaches, such as monovision. Here, binocular visual acuity was measured as a function of object vergence in three subjects by using a binocular adaptive optics vision analyzer. Visual acuity was measured at two luminance levels (photopic and mesopic) under several optical conditions: 1) natural vision (4 mm pupils, best corrected distance vision), 2) pure-defocus monovision ( + 1.25 D add in the nondominant eye), 3) small aperture monovision (1.6 mm pupil in the nondominant eye), and 4) combined small aperture and defocus monovision (1.6 mm pupil and a + 0.75 D add in the nondominant eye). Visual simulations of a small aperture corneal inlay suggest that the device extends DOF as effectively as traditional monovision in photopic light, in both cases at the cost of binocular summation. However, individual factors, such as aperture centration or sensitivity to mesopic conditions should be considered to assure adequate visual outcomes. PMID:25360355

  9. Optical design through optimization using freeform orthogonal polynomials for rectangular apertures

    NASA Astrophysics Data System (ADS)

    Nikolic, Milena; Benítez, P.; Miñano, Juan C.; Grabovickic, D.; Liu, Jiayao; Narasimhan, B.; Buljan, M.

    2015-09-01

    With the increasing interest in using freeform surfaces in optical systems, driven by novel application opportunities and manufacturing techniques, new challenges are constantly emerging. Optical systems have traditionally used circular apertures, but new types of freeform systems call for different aperture shapes. The first non-circular aperture shape of interest, owing to tessellation and various folded systems, is the rectangular one. This paper presents a comparative analysis of a simple local optimization of one design example using different orthogonalized representations of a freeform surface over a rectangular aperture. A very simple single-surface off-axis mirror is chosen as the starting system. The surface is fitted to the desired polynomial representation, and the whole system is then optimized with the effective focal length as the only constraint. The process is repeated for different surface representations, among them some defined inside a circle, such as Forbes freeform polynomials, and others defined inside a rectangle, such as newly computed Legendre-type polynomials orthogonalized in the gradient. With this new polynomial type the optimization converges faster to a deeper minimum than with the polynomials defined inside a circle. The average MTF values across 17 field points also show clear benefits in using polynomials adapted more accurately to the aperture used in the system.
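The advantage claimed above, that a basis orthogonal over the actual aperture behaves better, can be illustrated numerically with a tensor-product Legendre basis, which is orthogonal over a rectangle. This is a simpler stand-in for the paper's gradient-orthogonal polynomials; the grid resolution and term indices below are arbitrary:

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_2d(nx, ny, x, y):
    """Tensor-product Legendre term P_nx(x) * P_ny(y) on [-1, 1]^2."""
    cx = np.zeros(nx + 1)
    cx[nx] = 1.0
    cy = np.zeros(ny + 1)
    cy[ny] = 1.0
    return legval(x, cx) * legval(y, cy)

# Sample the (rescaled) rectangular aperture on a uniform grid.
x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
dA = (2.0 / 200) ** 2

# Distinct terms integrate to ~0 over the rectangle, so least-squares surface
# fit coefficients decouple term by term -- unlike a circle-orthogonal basis
# truncated to a rectangular aperture, whose cross terms do not vanish.
cross = float(np.sum(legendre_2d(1, 2, x, y) * legendre_2d(2, 1, x, y)) * dA)
norm = float(np.sum(legendre_2d(1, 2, x, y) ** 2) * dA)  # analytically (2/3)*(2/5)
```

The decoupling is what makes such a basis attractive for the fit-then-optimize loop the abstract describes, since each coefficient can be updated independently.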

  10. Comparison of binocular through-focus visual acuity with monovision and a small aperture inlay

    PubMed Central

    Schwarz, Christina; Manzanera, Silvestre; Prieto, Pedro M.; Fernández, Enrique J.; Artal, Pablo

    2014-01-01

    Corneal small aperture inlays provide extended depth of focus as a solution to presbyopia. As this procedure is becoming more popular, it is interesting to compare its performance with traditional approaches, such as monovision. Here, binocular visual acuity was measured as a function of object vergence in three subjects by using a binocular adaptive optics vision analyzer. Visual acuity was measured at two luminance levels (photopic and mesopic) under several optical conditions: 1) natural vision (4 mm pupils, best corrected distance vision), 2) pure-defocus monovision ( + 1.25 D add in the nondominant eye), 3) small aperture monovision (1.6 mm pupil in the nondominant eye), and 4) combined small aperture and defocus monovision (1.6 mm pupil and a + 0.75 D add in the nondominant eye). Visual simulations of a small aperture corneal inlay suggest that the device extends DOF as effectively as traditional monovision in photopic light, in both cases at the cost of binocular summation. However, individual factors, such as aperture centration or sensitivity to mesopic conditions should be considered to assure adequate visual outcomes. PMID:25360355

  11. The aperture problem in contoured stimuli

    PubMed Central

    Kane, David; Bex, Peter J.; Dakin, Steven C.

    2010-01-01

    A moving object elicits responses from V1 neurons tuned to a broad range of locations, directions, and spatiotemporal frequencies. Global pooling of such signals can overcome their intrinsic ambiguity in relation to the object’s direction/speed (the “aperture problem”); here we examine the role of low-spatial frequencies (SF) and second-order statistics in this process. Subjects made a 2AFC fine direction-discrimination judgement of ‘naturally’ contoured stimuli viewed rigidly translating behind a series of small circular apertures. This configuration allowed us to manipulate the scene in several ways; by randomly switching which portion of the stimulus was presented behind each aperture or by occluding certain spatial frequency bands. We report that global motion integration is (a) largely insensitive to the second-order statistics of such stimuli and (b) is rigidly broadband even in the presence of a disrupted low SF component. PMID:19810794

  12. Solar energy apparatus with apertured shield

    NASA Technical Reports Server (NTRS)

    Collings, Roger J. (Inventor); Bannon, David G. (Inventor)

    1989-01-01

    A protective apertured shield for use about an inlet to a solar apparatus which includes a cavity receiver for absorbing concentrated solar energy. A rigid support truss assembly is fixed to the periphery of the inlet and projects radially inwardly therefrom to define a generally central aperture area through which solar radiation can pass into the cavity receiver. A non-structural, laminated blanket is spread over the rigid support truss in such a manner as to define an outer surface area and an inner surface area diverging radially outwardly from the central aperture area toward the periphery of the inlet. The outer surface area faces away from the inlet and the inner surface area faces toward the cavity receiver. The laminated blanket includes at least one layer of material, such as ceramic fiber fabric, having high infra-red emittance and low solar absorption properties, and another layer, such as metallic foil, having low infra-red emittance properties.

  13. Aperture shape optimization for IMRT treatment planning

    NASA Astrophysics Data System (ADS)

    Cassioli, A.; Unkelbach, J.

    2013-01-01

    We propose an algorithm for aperture shape optimization (ASO) for step and shoot delivery of intensity-modulated radiotherapy. The method is an approach to direct aperture optimization (DAO) that exploits gradient information to locally optimize the positions of the leaves of a multileaf collimator. Based on the dose-influence matrix, the dose distribution is locally approximated as a linear function of the leaf positions. Since this approximation is valid only in a small interval around the current leaf positions, we use a trust-region-like method to optimize the leaf positions: in one iteration, the leaf motion is confined to the beamlets where the leaf edges are currently positioned. This yields a well-behaved optimization problem for the leaf positions and the aperture weights, which can be solved efficiently. If, in one iteration, a leaf is moved to the edge of a beamlet, the leaf motion can be confined to the neighboring beamlet in the next iteration. This allows for large leaf position changes over the course of the algorithm. In this paper, the ASO algorithm is embedded into a column-generation approach to DAO. After a new aperture is added to the treatment plan, we use the ASO algorithm to simultaneously optimize aperture weights and leaf positions for the new set of apertures. We present results for a paraspinal tumor case, a prostate case and a head and neck case. The computational results indicate that, using this approach, treatment plans close to the ideal fluence map optimization solution can be obtained.
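The core idea above, that dose is linear in a leaf position only while the leaf edge stays inside its current beamlet, can be sketched in one dimension. This toy uses a greedy coordinate search as a stand-in for the paper's gradient-based trust-region step; the influence matrix, beamlet count, and step size are all invented:

```python
import numpy as np

# Toy 1D setup: 10 beamlets, 10 dose points, a fixed positive dose-influence
# matrix D, and a target dose produced by an ideal aperture spanning [3, 7).
rng = np.random.default_rng(0)
D = rng.uniform(0.5, 1.5, size=(10, 10))
target = D @ np.array([0, 0, 0, 1, 1, 1, 1, 0, 0, 0], dtype=float)

def fluence(left, right, n=10):
    """Fractional beamlet openings for leaf edges at continuous positions."""
    edges = np.arange(n + 1, dtype=float)
    return np.clip(np.minimum(edges[1:], right) - np.maximum(edges[:-1], left),
                   0.0, 1.0)

def objective(left, right):
    return float(np.sum((D @ fluence(left, right) - target) ** 2))

# Trust-region-like moves: each leaf edge is nudged a small step within the
# beamlet it currently occupies, where dose really is linear in the position.
left, right = 1.0, 9.0
for _ in range(60):
    for step in (0.1, -0.1):
        if objective(left + step, right) < objective(left, right):
            left += step
        if objective(left, right + step) < objective(left, right):
            right += step
```

Because each accepted move stays within the linear-validity interval, the leaves ratchet across beamlets over many iterations, which mirrors the paper's mechanism for allowing large total leaf excursions.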

  14. Synthetic aperture radar capabilities in development

    SciTech Connect

    Miller, M.

    1994-11-15

    The Imaging and Detection Program (IDP) within the Laser Program is currently developing an X-band Synthetic Aperture Radar (SAR) to support the Joint US/UK Radar Ocean Imaging Program. The radar system will be mounted in the program's Airborne Experimental Test-Bed (AETB), where the initial mission is to image ocean surfaces and better understand the physics of low grazing angle backscatter. The presentation will discuss the radar's overall functionality and briefly review the AETB's capabilities. Vital subsystems including radar, computer, navigation, antenna stabilization, and SAR focusing algorithms will be examined in more detail.

  15. PDII- Additional discussion of the dynamic aperture

    SciTech Connect

    Norman M. Gelfand

    2002-07-23

    This note is in the nature of an addition to the dynamic aperture calculations found in the report on the Proton Driver, FERMILAB-TM-2169. An extensive discussion of the Proton Driver lattice, as well as the nomenclature used to describe it, can be found in TM-2169. Basically the proposed lattice is a racetrack design with the two arcs joined by two long straight sections. The straight sections are dispersion free. Tracking studies were undertaken with the objective of computing the dynamic aperture for the lattice and some of the results have been incorporated into TM-2169. This note is a more extensive report of those calculations.

  16. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44 MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  17. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44 MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  18. Processing for spaceborne synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Lybanon, M.

    1973-01-01

    The data handling and processing involved in using synthetic aperture radar as a satellite-borne earth resources remote sensor are considered. The discussion covers the nature of the problem, the theory, both conventional and potential advanced processing techniques, and a complete computer simulation. It is shown that digital processing is a real possibility, and some future directions for research are suggested.
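The central digital operation in SAR processing is correlating a point target's quadratic phase history (a chirp) with a matched reference, which the abstract's "conventional processing techniques" all implement in some form. A minimal sketch, with an invented chirp rate and target offset:

```python
import numpy as np

# A point target's azimuth phase history is a chirp; correlating the echo
# with a matched reference chirp (here via the FFT) is the core digital step
# of SAR image formation. All parameters below are illustrative.
n = 512
t = np.linspace(-1.0, 1.0, n)
chirp = np.exp(1j * np.pi * 50.0 * t ** 2)   # reference quadratic phase history
echo = np.roll(chirp, 37)                     # same target, shifted along track

# Circular cross-correlation via the FFT; the peak recovers the target position.
compressed = np.fft.ifft(np.fft.fft(echo) * np.conj(np.fft.fft(chirp)))
peak = int(np.abs(compressed).argmax())
```

The FFT route is what makes spaceborne digital processing feasible: the correlation cost drops from O(n²) per output line to O(n log n).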

  19. Multiple aperture seeker design for endoatmospheric intercepts

    NASA Astrophysics Data System (ADS)

    Werner, Jennifer; Shui, Ven; Reeves, Barry

    1993-06-01

    An IR optical seeker concept developed by Loral Infrared and Imaging Systems to meet the endoatmospheric requirements for hit-to-kill intercepts is presented. The seeker works in conjunction with an uncooled window concept developed by Textron Defense Systems. The combination of the compact seeker design with an uncooled window aperture provides an adequate solution with minimal complexity.

  20. Interferometric Synthetic Aperture Microwave Radiometers : an Overview

    NASA Technical Reports Server (NTRS)

    Colliander, Andreas; McKague, Darren

    2011-01-01

    This paper describes 1) the progress of the work of the IEEE Geoscience and Remote Sensing Society (GRSS) Instrumentation and Future Technologies Technical Committee (IFT-TC) Microwave Radiometer Working Group and 2) an overview of the development of interferometric synthetic aperture microwave radiometers as an introduction to a dedicated session.

  1. Multispectral Dual-Aperture Schmidt Objective

    NASA Technical Reports Server (NTRS)

    Minott, P. O.

    1983-01-01

    Off-axis focal planes make room for beam splitters. System includes two off-axis primary spherical reflectors, each concentric with refractive corrector at aperture. Off-axis design assures large aperture required for adequate spatial resolution. Separate images have precise registration, used for multispectral resource mapping or remote sensing.

  2. Partially redundant apertures for infrared stellar imaging

    NASA Astrophysics Data System (ADS)

    Aitken, G. J. M.; Corteggiani, J. P.; Gay, J.

    1981-06-01

    Spectral-bandwidth constraints to ensure controlled amounts of redundancy are established for a class of two-dimensional partially redundant arrays (PRA's). In the IR, where speckle statistics are poor, the telescope-atmosphere modulation transfer function is determined solely by the PRA geometry. Signal-to-noise-ratio estimates, an optimum aperture criterion, and a six-element PRA example are presented.

  3. Agile multiple aperture imager receiver development

    NASA Astrophysics Data System (ADS)

    Lees, David E. B.; Dillon, Robert F.

    1990-02-01

    A variety of unconventional imaging schemes have been investigated in recent years that rely on small, unphased optical apertures (subapertures) to measure properties of an incoming optical wavefront and recover images of distant objects without using precisely figured, large aperture optical elements. Such schemes offer several attractive features. They provide the potential to create very large effective apertures that are expandable over time and can be launched into space in small pieces. Since the subapertures are identical in construction, they may be mass producible at potentially low cost. A preliminary design for a practical low cost optical receiver is presented. The multiple aperture design has high sensitivity, wide field-of-view, and is lightweight. A combination of spectral, temporal, and spatial background suppression is used to achieve daytime operation at low signal levels. Modular packaging to make the number of receiver subapertures conveniently scalable is also presented. The design is appropriate to a ground-based proof-of-concept experiment for long range active speckle imaging.

  4. Radiation safety considerations in proton aperture disposal.

    PubMed

    Walker, Priscilla K; Edwards, Andrew C; Das, Indra J; Johnstone, Peter A S

    2014-04-01

    Beam shaping in scattered and uniform scanned proton beam therapy (PBT) is commonly accomplished with brass apertures. Due to proton interactions, these devices become radioactive and could pose safety issues and radiation hazards. Nearly 2,000 patient-specific devices per year are used at Indiana University Cyclotron Operations (IUCO) and IU Health Proton Therapy Center (IUHPTC); these devices require proper guidelines for disposal. IUCO practice has been to store these apertures for at least 4 mo to allow for safe transfer to recycling contractors. The devices require decay in two staged secure locations, including at least 4 mo in a separate building, at which point half are ready for disposal. At 6 mo, 20-30% of apertures require further storage. This process requires significant space and manpower and should be considered in the design process for new clinical facilities. More widespread adoption of pencil beam or spot scanning nozzles may obviate this issue, as apertures will then no longer be necessary. PMID:24562073

  5. Vowel Aperture and Syllable Segmentation in French

    ERIC Educational Resources Information Center

    Goslin, Jeremy; Frauenfelder, Ulrich H.

    2008-01-01

    The theories of Pulgram (1970) suggest that if the vowel of a French syllable is open then it will induce syllable segmentation responses that result in the syllable being closed, and vice versa. After the empirical verification that our target French-speaking population was capable of distinguishing between mid-vowel aperture, we examined the…

  6. Aperture synthesis imaging from the moon

    NASA Technical Reports Server (NTRS)

    Burns, Jack O.

    1991-01-01

    Four candidate imaging aperture synthesis concepts are described for possible emplacement on the moon beginning in the next decade. These include an optical interferometer with 10 microarcsec resolution, a submillimeter array with 6 milliarcsec resolution, a moon-earth VLBI experiment, and a very low frequency interferometer in lunar orbit.

  7. Clutter free synthetic aperture radar correlator

    NASA Technical Reports Server (NTRS)

    Jain, A.

    1977-01-01

    A synthetic aperture radar correlation system including a moving diffuser located at the image plane of a radar processor is described. The output of the moving diffuser is supplied to a lens whose impulse response is at least as wide as that of the overall processing system. A significant reduction in clutter is achieved.

  8. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any one dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. The human auditory system has the astonishing ability to localize, track, and filter the selected sound sources or
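The compressive multiplexing described above rests on sparse recovery: far fewer coded measurements than unknowns, inverted with a sparsity-promoting algorithm. A minimal sketch using ISTA (iterative shrinkage-thresholding), a generic reconstruction method rather than the dissertation's; all problem sizes, the regularization weight, and the random seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressive measurement: a k-sparse signal observed through far fewer
# random projections than it has entries.
n, m, k = 128, 48, 4
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # measurement (coding) matrix
y = A @ x_true                                  # compressive measurements

# ISTA for min 0.5 * ||Ax - y||^2 + lam * ||x||_1: gradient step on the data
# term, then soft-thresholding to promote sparsity.
lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz const. of gradient
x = np.zeros(n)
for _ in range(5000):
    z = x - step * (A.T @ (A @ x - y))
    x = np.sign(z) * np.maximum(np.abs(z) - lam * step, 0.0)
```

Despite having only 48 measurements for 128 unknowns, the 4-sparse signal is recovered almost exactly, which is the capacity gain the abstract attributes to appropriate coding strategies.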

  9. RF Performance of Membrane Aperture Shells

    NASA Technical Reports Server (NTRS)

    Flint, Eric M.; Lindler, Jason E.; Thomas, David L.; Romanofsky, Robert

    2007-01-01

    This paper provides an overview of recent results establishing the suitability of Membrane Aperture Shell Technology (MAST) for Radio Frequency (RF) applications. These single surface shells are capable of maintaining their figure with no preload or pressurization and minimal boundary support, yet can be compactly roll stowed and passively self deploy. As such, they are a promising technology for enabling a future generation of RF apertures. In this paper, we review recent experimental and numerical results quantifying suitable RF performance. It is shown that candidate materials possess metallic coatings with sufficiently low surface roughness and that these materials can be efficiently fabricated into RF relevant doubly curved shapes. A numerical justification for using a reflectivity metric, as opposed to the more standard RF designer metric of skin depth, is presented and the resulting ability to use relatively thin coating thicknesses is experimentally validated with material sample tests. The validity of these independent film sample measurements is then confirmed through experimental results measuring RF performance for reasonably sized doubly curved apertures. Currently available best results are 22 dBi gain at 3 GHz (S-Band) for a 0.5 m aperture tested in prime focus mode, 28 dBi gain for the same antenna in the C-Band (4 to 6 GHz), and 36.8 dBi for a smaller 0.25 m antenna tested at 32 GHz in the Ka-Band. RF range test results for a segmented aperture (one possible scaling approach) are shown as well. Measured antenna system efficiencies (relative to the unachievable ideal) for these on-axis tests are generally quite good, typically ranging from 50 to 90%.

  10. Coded source neutron imaging with a MURA mask

    NASA Astrophysics Data System (ADS)

    Zou, Y. B.; Schillinger, B.; Wang, S.; Zhang, X. S.; Guo, Z. Y.; Lu, Y. R.

    2011-09-01

    In coded source neutron imaging the single aperture commonly used in neutron radiography is replaced with a coded mask. Using a coded source can improve the neutron flux at the sample plane when a very high L/D ratio is needed. Coded source imaging is a possible way to reduce the exposure time needed to get a neutron image with a very high L/D ratio. A 17×17 modified uniformly redundant array coded source was tested in this work. There are 144 holes of 0.8 mm diameter on the coded source. The neutron flux from the coded source is as high as from a single 9.6 mm aperture, while its effective L/D is the same as in the case of a 0.8 mm aperture. The Richardson-Lucy maximum likelihood algorithm was used for image reconstruction. Compared to an in-line phase contrast neutron image taken with a 1 mm aperture, it takes much less time for the coded source to get an image of similar quality.
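The Richardson-Lucy algorithm named above is a simple multiplicative maximum-likelihood update. A minimal 1D sketch, in which a smooth kernel stands in for the known system response (the real reconstruction operates on 2D data with the MURA mask pattern; the scene and kernel below are invented):

```python
import numpy as np

def richardson_lucy(measured, psf, iterations=200):
    """Minimal 1D Richardson-Lucy deconvolution.

    Each pass forward-projects the current estimate, compares it to the
    measurement as a ratio, and back-projects that ratio with the flipped
    PSF as a multiplicative correction. Nonnegativity is preserved.
    """
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(measured, measured.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = measured / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Toy usage: blur a two-spike scene with a smooth kernel, then recover it.
scene = np.zeros(64)
scene[20] = 1.0
scene[40] = 0.5
psf = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
measured = np.convolve(scene, psf / psf.sum(), mode="same")
restored = richardson_lucy(measured, psf, iterations=200)
```

The multiplicative form is well suited to counting data such as neutron images, since the estimate can never go negative.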

  11. Performance results for Beamlet: A large aperture multipass Nd glass laser

    SciTech Connect

    Campbell, J.H.; Barker, C.E.; VanWonterghem, B.M.; Speck, D.R.; Behrendt, W.C.; Murray, J.R.; Caird, J.A.; Decker, D.E.; Smith, I.C.

    1995-04-11

    The Beamlet laser is a large aperture, flashlamp pumped Nd:glass laser that is a scientific prototype of an advanced Inertial Fusion laser. Beamlet has achieved third harmonic conversion efficiency of near 80% with its nominal 35 cm × 35 cm square beam at mean 3ω fluences in excess of 8 J/cm² (3 ns). Beamlet uses an adaptive optics system to correct for aberrations and achieve less than 2× diffraction limited far field spot size.

  12. Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement

    SciTech Connect

    Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.

    2009-09-29

    This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).

  13. Modeling AXAF Obstructions with the Generalized Aperture Program.

    NASA Astrophysics Data System (ADS)

    Nguyen, D.; Gaetz, T.; Jerius, D.; Stern, I.

    The generalized aperture program is designed to simulate the effects of physical obstructions, such as thermal baffles and pre- and post-collimators, on the incident photon stream. It can handle a wide variety of aperture shapes, and has provisions to allow alterations of the photons by the apertures. The philosophy behind the aperture program is that a geometrically complicated aperture may be modeled by a combination of geometrically simpler apertures. This is done by incorporating a language, Lua, to lay out the apertures. User-provided call-back functions enable the modeling of the interactions of the incident photons with the apertures. This approach allows for maximum flexibility, since the geometry and interactions of obstructions can be specified by the user at run time.
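The compose-simple-apertures philosophy can be sketched with plain predicates. This is a hedged illustration, not the program's actual API (which scripts the layout in Lua); all shapes and names below are invented:

```python
# Each aperture is a predicate: True where a photon at (x, y) passes. A
# geometrically complicated obstruction is then just a composition of
# geometrically simple predicates.

def circle(cx, cy, r):
    return lambda x, y: (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

def rectangle(x0, y0, x1, y1):
    return lambda x, y: x0 <= x <= x1 and y0 <= y <= y1

def union(*apertures):
    return lambda x, y: any(a(x, y) for a in apertures)

def difference(a, b):
    """Aperture `a` with region `b` blocked."""
    return lambda x, y: a(x, y) and not b(x, y)

# An annular opening (central disc blocked), then a strut laid across it.
annulus = difference(circle(0, 0, 10), circle(0, 0, 4))
with_strut = difference(annulus, rectangle(-10, -0.5, 10, 0.5))
```

Ray tracing then reduces to evaluating the composed predicate at each photon's intersection point, and the user-supplied callbacks mentioned in the abstract would hook in where a photon is blocked or altered.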

  14. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor of three improvement in memory and performance over comparable resolution non-adaptive calculations has been demonstrated for a number of problems.
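The "isotropic 2:1 manner" means adjacent blocks may differ by at most one refinement level, so refinement must propagate outward from sharply refined regions. A minimal 1D stand-in for that balancing pass (real AMR codes apply it to a 3D block tree; this sketch is not CTH's implementation):

```python
def enforce_2to1(levels):
    """Propagate refinement until adjacent blocks differ by at most one level.

    Only the coarser neighbor is ever refined; a fine block is never
    coarsened, so resolution requested by the physics is preserved.
    """
    changed = True
    while changed:
        changed = False
        for i in range(len(levels) - 1):
            if levels[i] - levels[i + 1] > 1:       # right neighbor too coarse
                levels[i + 1] = levels[i] - 1
                changed = True
            elif levels[i + 1] - levels[i] > 1:     # left neighbor too coarse
                levels[i] = levels[i + 1] - 1
                changed = True
    return levels
```

For example, a single level-3 block in a level-0 mesh forces a graded halo: `enforce_2to1([0, 0, 3, 0, 0])` yields `[1, 2, 3, 2, 1]`. This grading is what keeps inter-block stencils simple at the cost of some extra refined blocks.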

  15. Method of forming aperture plate for electron microscope

    NASA Technical Reports Server (NTRS)

    Heinemann, K. (Inventor)

    1974-01-01

    An electron microscope is described with an electron source and a condenser lens having either a circular aperture for focusing a solid cone of electrons onto a specimen or an annular aperture for focusing a hollow cone of electrons onto the specimen. It also has an objective lens with an annular objective aperture for focusing electrons passing through the specimen onto an image plane. A method of making the annular objective aperture using electron imaging, electrolytic deposition and ion etching techniques is included.

  16. Bar-Code-Scribing Tool

    NASA Technical Reports Server (NTRS)

    Badinger, Michael A.; Drouant, George J.

    1991-01-01

    Proposed hand-held tool applies indelible bar code to small parts. Possible to identify parts for management of inventory without tags or labels. Microprocessor supplies bar-code data to impact-printer-like device. Device drives replaceable scribe, which cuts bar code on surface of part. Used to mark serially controlled parts for military and aerospace equipment. Also adapts for discrete marking of bulk items used in food and pharmaceutical processing.

  17. Vacuum aperture isolator for retroreflection from laser-irradiated target

    DOEpatents

    Benjamin, Robert F.; Mitchell, Kenneth B.

    1980-01-01

    The disclosure is directed to a vacuum aperture isolator for retroreflection from a laser-irradiated target. Within a vacuum chamber are disposed a beam focusing element, a disc having an aperture, and a recollimating element. The edge of the focused beam impinges on the edge of the aperture to produce a plasma which refracts any retroreflected light from the laser's target.

  18. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, W.F.

    1983-08-31

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.

  19. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, Walter F.

    1985-01-01

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.
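The resultant waveform described in both patent records, a fundamental guide-field frequency plus a second harmonic, can be illustrated numerically: adding the harmonic with the right phase skews each cycle so the rising (acceleration) portion is extended and the falling (reset) portion shortened. The frequency and the 0.25 relative harmonic amplitude below are illustrative, not the patent's values:

```python
import numpy as np

omega = 2 * np.pi            # fundamental guide-field frequency (1 Hz, illustrative)
a = 0.25                     # relative second-harmonic amplitude (illustrative)
t = np.linspace(0.0, 1.0, 100001)[:-1]
b = np.sin(omega * t) - a * np.sin(2 * omega * t)

# Fraction of each cycle spent rising (acceleration) rather than falling (reset).
rising = float(np.mean(np.diff(b) > 0))   # ~0.62 of the cycle, vs 0.5 without the harmonic
```

With this sign choice the turning points move apart, so roughly 62% of the period is available for acceleration while the reset is compressed into the remainder, which is the effect the second-harmonic winding produces magnetically.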

  20. Heuristic dynamic complexity coding

    NASA Astrophysics Data System (ADS)

    Škorupa, Jozef; Slowack, Jürgen; Mys, Stefaan; Lambert, Peter; Van de Walle, Rik

    2008-04-01

    Distributed video coding is a new video coding paradigm that shifts the computationally intensive motion estimation from encoder to decoder. This results in a lightweight encoder and a complex decoder, as opposed to the predictive video coding scheme (e.g., MPEG-X and H.26X) with a complex encoder and a lightweight decoder. Both schemes, however, lack the ability to adapt to varying complexity constraints imposed by encoder and decoder, which is an essential ability for applications targeting a wide range of devices with different complexity constraints or applications with temporarily variable complexity constraints. Moreover, the effect of complexity adaptation on the overall compression performance is of great importance and has not yet been investigated. To address this need, we have developed a video coding system that can adapt itself to complexity constraints by dynamically sharing the motion estimation computations between both components. On this system we have studied the effect of the complexity distribution on the compression performance. This paper describes how motion estimation can be shared using heuristic dynamic complexity and how the distribution of complexity affects the overall compression performance of the system. The results show that the complexity can indeed be shared between encoder and decoder in an efficient way at acceptable rate-distortion performance.

  1. The saga of the LEP dynamic aperture

    NASA Astrophysics Data System (ADS)

    Verdier, A.

    1999-05-01

    The large electron-positron collider LEP at CERN provides a beautiful example of our conceptual limits concerning the problem of dynamic aperture (short term stability of the transverse oscillations of particle trajectories) in circular machines. For operation at 45 GeV (the Z0 peak), the dynamic aperture did not pose any problem, although up to the end of 1993 its measured value was much smaller than predicted. After this date the measurements agreed with the predictions, but it was not possible to trace back the origin of the earlier discrepancy. At high energy (the maximum operating energy foreseen is 100 GeV) the beam emittance increases with the square of the beam energy. Therefore, low emittance optics were proposed. These optics suffer from large anharmonicities because of the increased sextupole strengths. This led to an unexpected limitation of the beam lifetime.

  2. Miniature synthetic-aperture radar system

    NASA Astrophysics Data System (ADS)

    Stockton, Wayne; Stromfors, Richard D.

    1990-11-01

    Loral Defense Systems-Arizona has developed a high-performance synthetic-aperture radar (SAR) for small aircraft and unmanned aerial vehicle (UAV) reconnaissance applications. This miniature radar, called Miniature Synthetic-Aperture Radar (MSAR), is packaged in a small volume and has low weight. It retains key features of large SAR systems, including high-resolution imaging and all-weather operation. The operating frequency of MSAR can optionally be selected to provide foliage penetration capability. Many imaging radar configurations can be derived using this baseline system. MSAR with a data link provides an attractive UAV sensor. MSAR with a real-time image formation processor is well suited to installations where onboard processing and immediate image analysis are required. The MSAR system provides high-resolution imaging for short-to-medium range reconnaissance applications.

  3. Polarization-sensitive interferometric synthetic aperture microscopy

    NASA Astrophysics Data System (ADS)

    South, Fredrick A.; Liu, Yuan-Zhi; Xu, Yang; Shemonski, Nathan D.; Carney, P. Scott; Boppart, Stephen A.

    2015-11-01

    Three-dimensional optical microscopy suffers from the well-known compromise between transverse resolution and depth-of-field. This is true for both structural imaging methods and their functional extensions. Interferometric synthetic aperture microscopy (ISAM) is a solution to the 3D coherent microscopy inverse problem that provides depth-independent transverse resolution. We demonstrate the extension of ISAM to polarization sensitive imaging, termed polarization-sensitive interferometric synthetic aperture microscopy (PS-ISAM). This technique is the first functionalization of the ISAM method and provides improved depth-of-field for polarization-sensitive imaging. The basic assumptions of polarization-sensitive imaging are explored, and refocusing of birefringent structures is experimentally demonstrated. PS-ISAM enables high-resolution volumetric imaging of birefringent materials and tissue.

  4. A large aperture electro-optic deflector

    NASA Astrophysics Data System (ADS)

    Bosco, A.; Boogert, S. T.; Boorman, G. E.; Blair, G. A.

    2009-05-01

    An electro-optic laser beam deflector with a clear optical aperture of 8.6 mm has been designed, realized, and tested. The electro-optic material used to implement the device was a MgO:LiNbO3 crystal. The exceptionally large aperture makes the device suitable for applications where fast scanning of high power laser beams is needed. The measured deflection angle was 120 μrad/kV for a total length of electro-optic material of 90 mm. A mode quality analysis of the laser beam revealed that the M2 of the laser is affected by less than 4% during scan operation when maximum driving voltage is applied.

  5. Design of large aperture focal plane shutter

    NASA Astrophysics Data System (ADS)

    Hu, Jia-wen; Ma, Wen-li; Huang, Jin-long

    2012-09-01

To satisfy the requirements of large telescopes, a large-aperture focal plane shutter with an aperture size of φ200 mm was researched and designed. It can be started and stopped in a relatively short time with precise positioning, and its blades can open and close simultaneously at any orientation. Timing belts and stepper motors were adopted as the drive mechanism. The velocity and position of the stepper motors are controlled by PWM pulses generated by a DSP. An exponential velocity curve is applied to the stepper motors so that the shutter starts and stops in a short time. The opening/closing time of the shutter is 0.2 s, which meets the performance requirements of large telescopes.
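The exponential velocity curve mentioned above can be sketched as a simple ramp profile; the start speed, top speed, and time constant below are illustrative assumptions, not values from the paper:

```python
import math

def exponential_ramp(v_start, v_max, tau, t):
    """Exponential velocity profile: approaches v_max with time constant tau."""
    return v_max - (v_max - v_start) * math.exp(-t / tau)

# Hypothetical stepper parameters: 100 steps/s start, 4000 steps/s top speed,
# 50 ms time constant, sampled every 10 ms.
profile = [exponential_ramp(100.0, 4000.0, 0.05, 0.01 * k) for k in range(6)]
```

The profile rises steeply at first and flattens near the top speed, which is why such ramps let a stepper reach speed quickly without losing steps.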

  6. Performance limits for Synthetic Aperture Radar.

    SciTech Connect

    Doerry, Armin Walter

    2006-02-01

The performance of a Synthetic Aperture Radar (SAR) system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to "get your arms around" the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics, no matter how bright the engineer tasked to generate a system design. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall SAR system. For example, there are definite optimum frequency bands that depend on weather conditions and range, and the minimum radar PRF for a fixed real antenna aperture dimension is independent of frequency. While the information herein is not new to the literature, collecting it into a single report should offer some value in reducing the "seek time".

  7. Frequency scanning from subwavelength aperture array.

    PubMed

    Yang, Rui; Zhang, Jiawei; Wang, Hui

    2014-06-15

    Resonant transmission of microwaves is demonstrated through subwavelength holes on a semicircular radiator. Split ring resonators, offering a perfect control of the emitting apertures, are applied to determine the radiation direction and the resonant frequency. Full wave simulation shows that our design is capable of achieving wide angular scanning beams without causing any other main lobe, and the steerable beams could be easily controlled through tuning the excitation frequency. PMID:24978511

  8. Analytic inversion in synthetic aperture radar.

    PubMed Central

    Rothaus, O S

    1994-01-01

A method of processing synthetic aperture radar signals that avoids some of the approximations currently in use that appear to be responsible for severe phase distortions is described. As a practical matter, this method requires N³ numerical operations, as opposed to the N² ln N currently the case, but N³ is now easily managed for N in the range of interest. PMID:11607485
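The operation counts quoted above are easy to compare numerically; the value N = 1024 below is an assumed, illustrative problem size, not one from the paper:

```python
import math

def ops_direct(n):
    """N^3 operation count for the analytic inversion described above."""
    return n ** 3

def ops_fft_based(n):
    """N^2 ln N operation count for the conventional approach."""
    return n ** 2 * math.log(n)

n = 1024
ratio = ops_direct(n) / ops_fft_based(n)  # how much more work the N^3 method is
```

For N = 1024 the direct method costs roughly 150 times more operations, which is the sense in which N³ is "now easily managed" on modern hardware.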

  9. Real-time interferometric synthetic aperture microscopy.

    PubMed

    Ralston, Tyler S; Marks, Daniel L; Carney, P Scott; Boppart, Stephen A

    2008-02-18

An interferometric synthetic aperture microscopy (ISAM) system design with real-time 2D cross-sectional processing is described in detail. The system can acquire, process, and display the ISAM reconstructed images at frame rates of 2.25 frames per second for 512 × 1024 pixel images. This system provides quantitatively meaningful structural information from previously indistinguishable scattering intensities and provides proof of feasibility for future real-time ISAM systems. PMID:18542337

  10. Variable-Aperture Reciprocating Reed Valve

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L. (Inventor); Myers, W. Neill (Inventor); Kelley, Anthony R. (Inventor); Yang, Hong Q. (Inventor)

    2015-01-01

    A variable-aperture reciprocating reed valve includes a valve body defining a through hole region having a contoured-profile portion. A semi-rigid plate is affixed on one side thereof to the valve body to define a cantilever extending across the through hole region. At least one free edge of the cantilever opposes the contoured-profile portion of the through hole region in a non-contact relationship.

  11. Feasibility of Swept Synthetic Aperture Ultrasound Imaging.

    PubMed

    Bottenus, Nick; Long, Will; Zhang, Haichong K; Jakovljevic, Marko; Bradway, David P; Boctor, Emad M; Trahey, Gregg E

    2016-07-01

    Ultrasound image quality is often inherently limited by the physical dimensions of the imaging transducer. We hypothesize that, by collecting synthetic aperture data sets over a range of aperture positions while precisely tracking the position and orientation of the transducer, we can synthesize large effective apertures to produce images with improved resolution and target detectability. We analyze the two largest limiting factors for coherent signal summation: aberration and mechanical uncertainty. Using an excised canine abdominal wall as a model phase screen, we experimentally observed an effective arrival time error ranging from 18.3 ns to 58 ns (root-mean-square error) across the swept positions. Through this clutter-generating tissue, we observed a 72.9% improvement in resolution with only a 3.75 dB increase in side lobe amplitude compared to the control case. We present a simulation model to study the effect of calibration and mechanical jitter errors on the synthesized point spread function. The relative effects of these errors in each imaging dimension are explored, showing the importance of orientation relative to the point spread function. We present a prototype device for performing swept synthetic aperture imaging using a conventional 1-D array transducer and ultrasound research scanner. Point target reconstruction error for a 44.2 degree sweep shows a reconstruction precision of 82.8 μm and 17.8 μm in the lateral and axial dimensions respectively, within the acceptable performance bounds of the simulation model. Improvements in resolution, contrast and contrast-to-noise ratio are demonstrated in vivo and in a fetal phantom. PMID:26863653

  12. Addressing Three Fallacies About Synthetic Aperture Radar

    NASA Astrophysics Data System (ADS)

    Atwood, Don; Garron, Jessica

    2013-12-01

    Synthetic aperture radar (SAR) has long been recognized as a valuable tool for real-time environmental analysis and understanding of the Earth's geophysical properties. With its ability to see through clouds and to image day and night in all seasons, it can provide high-resolution data when optical sensors cannot. This capability has enabled SAR scientists to delineate flooding events, assess earthquake damage, map forest fires, rescue trapped icebreakers, and identify the extent of oil spills.

  13. Exploiting Decorrelations In Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A.; Villasenor, John D.

    1994-01-01

    Temporal decorrelation between synthetic-aperture-radar data acquired on subsequent passes along same or nearly same trajectory serves as measure of change in target scene. Based partly on mathematical models of statistics of correlations between first- and second-pass radar echoes. Also based partly on Fourier-transform relations between radar-system impulse response and decorrelation functions particularly those expressing decorrelation effects of rotation and horizontal shift of trajectories between two passes.
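The use of temporal decorrelation as a change measure rests on the sample coherence between two co-registered complex images; a minimal sketch (with synthetic Gaussian data standing in for radar echoes) is:

```python
import numpy as np

def coherence(s1, s2):
    """Magnitude of the sample complex coherence between two co-registered
    complex SAR images: 1 for unchanged scenes, near 0 for full decorrelation."""
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

rng = np.random.default_rng(0)
s = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
noise = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
gamma_same = coherence(s, s)         # identical passes: coherence of 1
gamma_changed = coherence(s, noise)  # fully decorrelated scene: near 0
```

In practice the coherence is estimated over a local window, and its spatial map highlights regions where the target scene changed between passes.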

  14. Effective wavelength scaling of rectangular aperture antennas.

    PubMed

    Chen, Yuanyuan; Yu, Li; Zhang, Jiasen; Gordon, Reuven

    2015-04-20

    We investigate the resonances of aperture antennas from the visible to the terahertz regime, with comparison to comprehensive simulations. Simple piecewise analytic behavior is found for the wavelength scaling over the entire spectrum, with a linear regime through the visible and near-IR. This theory will serve as a useful and simple design tool for applications including biosensors, nonlinear plasmonics and surface enhanced spectroscopies. PMID:25969079

  15. Aperture modulated, translating bed total body irradiation

    SciTech Connect

    Hussain, Amjad; Villarreal-Barajas, Jose Eduardo; Dunscombe, Peter; Brown, Derek W.

    2011-02-15

Purpose: Total body irradiation (TBI) techniques aim to deliver a uniform radiation dose to a patient with an irregular body contour and a heterogeneous density distribution to within ±10% of the prescribed dose. In the current article, the authors present a novel, aperture modulated, translating bed TBI (AMTBI) technique that produces a high degree of dose uniformity throughout the entire patient. Methods: The radiation beam is dynamically shaped in two dimensions using a multileaf collimator (MLC). The irregular surface compensation algorithm in the Eclipse treatment planning system is used for fluence optimization, which is performed based on penetration depth and internal inhomogeneities. Two optimal fluence maps (AP and PA) are generated and beam apertures are created to deliver these optimal fluences. During treatment, the patient/phantom is translated on a motorized bed close to the floor (source to bed distance: 204.5 cm) under a stationary radiation beam at a 0° gantry angle. The bed motion and dynamic beam apertures are synchronized. Results: The AMTBI technique produces a more homogeneous dose distribution than fixed open beam translating bed TBI. In phantom studies, the dose deviation along the midline is reduced from 10% to less than 5% of the prescribed dose in the longitudinal direction. Dose to the lung is reduced by more than 15% compared to the unshielded fixed open beam technique. At the lateral body edges, the dose received from the open beam technique was 20% higher than that prescribed at umbilicus midplane. With AMTBI the dose deviation in this same region is reduced to less than 3% of the prescribed dose. Validation of the technique was performed using thermoluminescent dosimeters in a Rando phantom. Agreement between calculation and measurement was better than 3% in all cases. Conclusions: A novel, translating bed, aperture modulated TBI technique that employs dynamically shaped MLC defined beams is shown to improve dose uniformity.

  16. Code-multiplexed optical scanner

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Arain, Muzammil A.

    2003-03-01

    A three-dimensional (3-D) optical-scanning technique is proposed based on spatial optical phase code activation on an input beam. This code-multiplexed optical scanner (C-MOS) relies on holographically stored 3-D beam-forming information. Proof-of-concept C-MOS experimental results by use of a photorefractive crystal as a holographic medium generates eight beams representing a basic 3-D voxel element generated via a binary-code matrix of the Hadamard type. The experiment demonstrates the C-MOS features of no moving parts, beam-forming flexibility, and large centimeter-size apertures. A novel application of the C-MOS as an optical security lock is highlighted.
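The binary-code matrix of the Hadamard type mentioned above can be generated with the standard Sylvester construction; this sketch simply produces eight mutually orthogonal binary-phase codes, one per addressed beam:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Rows are mutually orthogonal +1/-1 codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

# Eight orthogonal codes, matching the eight beams of the basic 3-D voxel element.
H8 = hadamard(8)
```

Orthogonality of the rows (H·Hᵀ = 8·I for the 8×8 case) is what allows each stored beam to be activated independently by its phase code.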

  17. Synthetic aperture radar processing with tiered subapertures

    SciTech Connect

    Doerry, A.W.

    1994-06-01

    Synthetic Aperture Radar (SAR) is used to form images that are maps of radar reflectivity of some scene of interest, from range soundings taken over some spatial aperture. Additionally, the range soundings are typically synthesized from a sampled frequency aperture. Efficient processing of the collected data necessitates using efficient digital signal processing techniques such as vector multiplies and fast implementations of the Discrete Fourier Transform. Inherent in image formation algorithms that use these is a trade-off between the size of the scene that can be acceptably imaged, and the resolution with which the image can be made. These limits arise from migration errors and spatially variant phase errors, and different algorithms mitigate these to varying degrees. Two fairly successful algorithms for airborne SARs are Polar Format processing, and Overlapped Subaperture (OSA) processing. This report introduces and summarizes the analysis of generalized Tiered Subaperture (TSA) techniques that are a superset of both Polar Format processing and OSA processing. It is shown how tiers of subapertures in both azimuth and range can effectively mitigate both migration errors and spatially variant phase errors to allow virtually arbitrary scene sizes, even in a dynamic motion environment.

  18. Conical Rotating Aperture Geometries In Digital Radiography

    NASA Astrophysics Data System (ADS)

    Rudin, Stephen; Bednarek, Daniel R.; Wong, Roland

    1981-11-01

    Applications of conical rotating aperture (RA) geometries to digital radiography are described. Two kinds of conical RA imaging systems are the conical scanning beam and the conical scanning grid assemblies. These assemblies comprise coaxial conical surface(s) the axis of which is collinear with the x-ray focal spot. This geometry allows accurate alignment and continuous focusing of the slits or the grid lines. Image receptors which use solid state photodiode arrays are described for each type of conical RA system: multiple linear arrays for the conical scanning beam assembly and multiple area arrays for the conical scanning grid assembly. The digital rotating-aperture systems combine the wide dynamic range characteristics of solid state detectors with the superior scatter-rejection advantages of scanned beam approaches. The high scanning-beam velocities attainable by the use of rotating apertures should make it possible to obtain digital images for those procedures such as chest radiography which require large fields of view and short exposure times.

  19. Restoring Aperture Profile At Sample Plane

    SciTech Connect

    Jackson, J L; Hackel, R P; Lungershausen, A W

    2003-08-03

    Off-line conditioning of full-size optics for the National Ignition Facility required a beam delivery system to allow conditioning lasers to rapidly raster scan samples while achieving several technical goals. The main purpose of the optical system designed was to reconstruct at the sample plane the flat beam profile found at the laser aperture with significant reductions in beam wander to improve scan times. Another design goal was the ability to vary the beam size at the sample to scan at different fluences while utilizing all of the laser power and minimizing processing time. An optical solution was developed using commercial off-the-shelf lenses. The system incorporates a six meter relay telescope and two sets of focusing optics. The spacing of the focusing optics is changed to allow the fluence on the sample to vary from 2 to 14 Joules per square centimeter in discrete steps. More importantly, these optics use the special properties of image relaying to image the aperture plane onto the sample to form a pupil relay with a beam profile corresponding almost exactly to the flat profile found at the aperture. A flat beam profile speeds scanning by providing a uniform intensity across a larger area on the sample. The relayed pupil plane is more stable with regards to jitter and beam wander. Image relaying also reduces other perturbations from diffraction, scatter, and focus conditions. Image relaying, laser conditioning, and the optical system designed to accomplish the stated goals are discussed.

  20. Outdoor synthetic aperture acoustic ground target measurements

    NASA Astrophysics Data System (ADS)

    Bishop, Steven; Ngaya, Therese-Ann; Vignola, Joe; Judge, John; Marble, Jay; Gugino, Peter; Soumekh, Mehrdad; Rosen, Erik

    2010-04-01

A novel outdoor synthetic aperture acoustic (SAA) system consists of a microphone and loudspeaker traveling along a 6.3-meter rail system. This is an extension of a prior indoor laboratory measurement system in which selected targets were insonified while suspended in air. Here, the loudspeaker and microphone are aimed perpendicular to their direction of travel along the rail. The area next to the rail is insonified and the microphone records the reflected acoustic signal, while the travel of the transceiver along the rail creates a synthetic aperture allowing imaging of the scene. Ground surfaces consisted of weathered asphalt and short grass. Several surface-laid objects were arranged on the ground for SAA imaging. These included rocks, concrete masonry blocks, grout-covered foam blocks, foliage-obscured objects, and several spherical canonical targets such as a bowling ball and plastic and metal spheres. The measured data are processed and ground targets are further analyzed for characteristics and features amenable for discrimination. This paper includes a description of the measurement system, target descriptions, the synthetic aperture processing approach, and preliminary findings with respect to ground surface and target characteristics.

  1. Synthetic aperture imaging in ultrasound calibration

    NASA Astrophysics Data System (ADS)

    Ameri, Golafsoun; Baxter, John S. H.; McLeod, A. Jonathan; Jayaranthe, Uditha L.; Chen, Elvis C. S.; Peters, Terry M.

    2014-03-01

Ultrasound calibration allows for ultrasound images to be incorporated into a variety of interventional applications. Traditional Z-bar calibration procedures rely on wired phantoms with an a priori known geometry. The line fiducials produce small, localized echoes which are then segmented from an array of ultrasound images from different tracked probe positions. In conventional B-mode ultrasound, the wires at greater depths appear blurred and are difficult to segment accurately, limiting the accuracy of ultrasound calibration. This paper presents a novel ultrasound calibration procedure that takes advantage of synthetic aperture imaging to reconstruct high resolution ultrasound images at arbitrary depths. In these images, line fiducials are much more readily and accurately segmented, leading to decreased calibration error. The proposed calibration technique is compared to one based on B-mode ultrasound. The fiducial localization error was improved from 0.21 mm in conventional B-mode images to 0.15 mm in synthetic aperture images, corresponding to an improvement of 29%. This resulted in an overall reduction of calibration error from a target registration error of 2.00 mm to 1.78 mm, an improvement of 11%. Synthetic aperture images display greatly improved segmentation capabilities due to their improved resolution and interpretability, resulting in improved calibration.
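The quoted improvement percentages follow directly from the reported errors; a quick arithmetic check:

```python
def percent_improvement(before, after):
    """Relative reduction of an error metric, as a percentage of the baseline."""
    return 100.0 * (before - after) / before

# Values reported in the abstract above.
fle = percent_improvement(0.21, 0.15)   # fiducial localization error, mm
tre = percent_improvement(2.00, 1.78)   # target registration error, mm
```

Rounding 0.06/0.21 gives the stated 29% fiducial localization improvement, and 0.22/2.00 gives the stated 11% target registration improvement.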

  2. Diffraction contrast imaging using virtual apertures.

    PubMed

    Gammer, Christoph; Burak Ozdol, V; Liebscher, Christian H; Minor, Andrew M

    2015-08-01

    Two methods on how to obtain the full diffraction information from a sample region and the associated reconstruction of images or diffraction patterns using virtual apertures are demonstrated. In a STEM-based approach, diffraction patterns are recorded for each beam position using a small probe convergence angle. Similarly, a tilt series of TEM dark-field images is acquired. The resulting datasets allow the reconstruction of either electron diffraction patterns, or bright-, dark- or annular dark-field images using virtual apertures. The experimental procedures of both methods are presented in the paper and are applied to a precipitation strengthened and creep deformed ferritic alloy with a complex microstructure. The reconstructed virtual images are compared with conventional TEM images. The major advantage is that arbitrarily shaped virtual apertures generated with image processing software can be designed without facing any physical limitations. In addition, any virtual detector that is specifically designed according to the underlying crystal structure can be created to optimize image contrast. PMID:25840371

  3. The radiation from apertures in curved surfaces

    NASA Technical Reports Server (NTRS)

    Pathak, P. H.; Kouyoumjian, R. G.

    1973-01-01

    The geometrical theory of diffraction is extended to treat the radiation from apertures or slots in convex, perfectly-conducting surfaces. It is assumed that the tangential electric field in the aperture is known so that an equivalent, infinitesimal source can be defined at each point in the aperture. Surface rays emanate from this source which is a caustic of the ray system. A launching coefficient is introduced to describe the excitation of the surface ray modes. If the field radiated from the surface is desired, the ordinary diffraction coefficients are used to determine the field of the rays shed tangentially from the surface rays. The field of the surface ray modes is not the field on the surface; hence if the mutual coupling between slots is of interest, a second coefficient related to the launching coefficient must be employed. In the region adjacent to the shadow boundary, the component of the field directly radiated from the source is presented by Fock-type functions. In the illuminated region the incident radiation from the source (this does not include the diffracted field components) is treated by geometrical optics. This extension of the geometrical theory of diffraction is applied to calculate the radiation from slots on elliptic cylinders, spheres and spheroids.

  4. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are to: (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur with our conclusions so we can embark on a plan to use the proposed uplink system; (4) identify the need for the development of appropriate technology and infusion in the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  5. Cluster speckle structures through multiple apertures forming a closed curve

    NASA Astrophysics Data System (ADS)

    Mosso, E.; Tebaldi, M.; Lencina, A.; Bolognini, N.

    2010-04-01

    In this work, cluster-like speckle patterns are analyzed. These patterns are generated when a diffuser illuminated by coherent light is imaged by a lens having a pupil mask with multiple apertures forming a closed curve. We show that the cluster structure results from the complex modulation produced inside each speckle which is generated by multiple interferences of light through the apertures. In particular, when the apertures are uniformly distributed along a closed curve, the resulting image speckle cluster replicates the pupil aperture distribution. Experimental results and theoretical simulations show that cluster features depend on the apertures distribution and the size of the closed curves.

  6. Jacobi-Bessel Analysis Of Antennas With Elliptical Apertures.

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Y.

    1989-01-01

    Coordinate transformation improves convergence pattern analysis of elliptical-aperture antennas. Modified version of Jacobi-Bessel expansion for vector diffraction analysis of reflector antennas uses coordinate transformation to improve convergence with elliptical apertures. Expansion converges rapidly for antennas with circular apertures, but less rapidly for elliptical apertures. Difference in convergence behavior between circular and elliptical Jacobi-Bessel algorithms indicated by highest values of indices m, n, and p required to achieve same accuracy in computed radiation pattern of offset paraboloidal antenna with elliptical aperture.

  7. The sensitivity of synthetic aperture radiometers for remote sensing applications from space

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1989-01-01

Aperture synthesis offers a means of realizing the full potential of microwave remote sensing from space by helping to overcome the limitations set by antenna size. The result is a potentially lighter, more adaptable structure for applications in space. However, because the physical collecting area is reduced, the signal-to-noise ratio is reduced and may adversely affect the radiometric sensitivity. Sensitivity is an especially critical issue for measurements to be made from low earth orbit because the motion of the platform limits the integration time available for forming an image. The purpose is to develop expressions for the sensitivity of remote sensing systems which use aperture synthesis. The objective is to develop basic equations general enough to be used to obtain the sensitivity of the several variations of aperture synthesis which were proposed for sensors in space. The conventional microwave imager (a scanning total power radiometer) is treated as a special case and a comparison of three synthetic aperture configurations with the conventional imager is presented.
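A baseline sensitivity expression of the kind the paper generalizes is the ideal total-power radiometer equation, ΔT = T_sys / √(B·τ); the system temperature, bandwidth, and integration time below are illustrative assumptions, not values from the paper:

```python
import math

def delta_t(t_sys_k, bandwidth_hz, integration_s):
    """Ideal total-power radiometric sensitivity: Delta-T = T_sys / sqrt(B * tau)."""
    return t_sys_k / math.sqrt(bandwidth_hz * integration_s)

# Hypothetical L-band values: 500 K system temperature, 27 MHz bandwidth,
# 10 ms integration time (short, as motion in low earth orbit limits tau).
dt = delta_t(500.0, 27e6, 0.01)  # sensitivity in kelvin
```

The sketch makes the trade-off in the abstract concrete: shrinking the integration time τ (as platform motion forces) directly degrades ΔT.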

  8. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and is closer to reality than theoretical estimations. In this approach, a fast and parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has obtained promising performance.

  9. Ten Recent Enhancements To Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ; Rebull, L. M.; Gorjian, V.

    2013-01-01

    Aperture Photometry Tool is free, multi-platform, easy-to-install software for astronomical research, as well as for learning, visualizing, and refining aperture-photometry analyses. This mature software has been under development for five years, and is a silent workhorse of the NASA/IPAC Teacher Archive Research Program. Software version 2.1.5 is described by Laher et al., Publications of the Astronomical Society of the Pacific, Vol. 124, No. 917, pp. 737-763, (July 2012). Four software upgrades have been released since the publication, which include new capabilities, increased speed, more user-friendliness, and some minor bug fixes. Visit www.aperturephotometry.org to download the latest version. The enhancements are as follows: 1) Added new Tools menu option to write selected primary-image data to a comma-separated-value file (for importing into Excel); 2) Added a new display of the color-table levels on a separate panel; 3) Added a new tool to measure the angular separation between positions on the thumbnail image, via mouse-cursor drag and release; 4) Added a new tool to overlay an aperture at user-specified coordinates (in addition to aperture overlay via mouse click); 5) Speeded up the source-list tool with optional multithreading in its automatic mode (allowed thread number is user-specifiable); 6) Added a new “Number” column to the output aperture-photometry-table file in order to track the input source order (multithreading reorders the output); 7) Upgraded the source-list tool to accept input source lists containing positions in sexagesimal equatorial coordinates (in addition to decimal degrees, or, alternatively, pixel coordinates); 8) Added a new decimal/sexagesimal converter; 9) Upgraded the source-list creation tool to compute the detection threshold using robust estimates of the local background and local data dispersion, where the user can select the grid and window sizes for these local calculations; and 10) Modified the batch mode to
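Enhancement 8 above, the decimal/sexagesimal converter, can be sketched as follows; the function name and the test coordinates are illustrative assumptions, not taken from the tool:

```python
def deg_to_sexagesimal(ra_deg, dec_deg):
    """Convert decimal-degree equatorial coordinates to sexagesimal form:
    RA as (hours, minutes, seconds), Dec as (sign, degrees, arcmin, arcsec)."""
    ra_hours = ra_deg / 15.0          # 360 degrees = 24 hours of right ascension
    h = int(ra_hours)
    m = int((ra_hours - h) * 60)
    s = ((ra_hours - h) * 60 - m) * 60

    sign = '-' if dec_deg < 0 else '+'
    dd = abs(dec_deg)
    d = int(dd)
    dm = int((dd - d) * 60)
    ds = ((dd - d) * 60 - dm) * 60
    return (h, m, s), (sign, d, dm, ds)

# Roughly the Orion Nebula, used here only as a familiar test coordinate.
ra, dec = deg_to_sexagesimal(83.8221, -5.3911)
```

A production converter would also round seconds carefully (carrying 60.0 s into the minutes field), which this sketch omits for brevity.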

  10. Experimental instrumentation system for the Phased Array Mirror Extendible Large Aperture (PAMELA) test program

    NASA Technical Reports Server (NTRS)

    Boykin, William H., Jr.

    1993-01-01

Adaptive optics are used in telescopes both for viewing objects with minimum distortion and for transmitting laser beams with minimum beam divergence and dance. In order to test concepts on a smaller scale, NASA MSFC is in the process of setting up an adaptive optics test facility with precision (fraction of wavelengths) measurement equipment. The initial system under test is the adaptive optical telescope called PAMELA (Phased Array Mirror Extendible Large Aperture). Goals of this test are: assessment of test hardware specifications for PAMELA application and the determination of the sensitivities of instruments for measuring PAMELA (and other adaptive optical telescopes) imperfections; evaluation of the PAMELA system integration effort and test progress and recommended actions to enhance these activities; and development of concepts and prototypes of experimental apparatuses for PAMELA.

  11. A synthetic aperture study of aperture size in the presence of noise and in vivo clutter

    NASA Astrophysics Data System (ADS)

    Bottenus, Nick; Byram, Brett C.; Trahey, Gregg E.

    2013-03-01

Conventional wisdom in ultrasonic array design drives development towards larger arrays because of the inverse relationship between aperture size and resolution. We propose a method using synthetic aperture beamforming to study image quality as a function of aperture size in simulation, in a phantom and in vivo. A single data acquisition can be beamformed to produce matched images with a range of aperture sizes, even in the presence of target motion. In this framework we evaluate the reliability of typical image quality metrics - speckle signal-to-noise ratio, contrast and contrast-to-noise ratio - for use in in vivo studies. Phantom and simulation studies are in good agreement in that there exists a point of diminishing returns in image quality at larger aperture sizes. We demonstrate challenges in applying and interpreting these metrics in vivo, showing results in hypoechoic vasculature regions. We explore the use of speckle brightness to describe image quality in the presence of in vivo clutter and underlying tissue inhomogeneities.

  12. Imaging performance of annular apertures. II - Line spread functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1978-01-01

    Line images formed by aberration-free optical systems with annular apertures are investigated in the whole range of central obstruction ratios. Annular apertures form line images with central and side line groups. The number of lines in each line group is given by the ratio of the outer diameter of the annular aperture to the width of the annulus. The theoretical energy fraction of 0.889 in the central line of the image formed by an unobstructed aperture increases for centrally obstructed apertures to 0.932 for the central line group. Energy fractions for the central and side line groups are practically constant for all obstruction ratios and for each line group. The illumination of rectangular secondary apertures of various length/width ratios by apertures of various obstruction ratios is discussed.

  13. Two-Dimensional Synthetic-Aperture Radiometer

    NASA Technical Reports Server (NTRS)

    LeVine, David M.

    2010-01-01

    A two-dimensional synthetic-aperture radiometer, now undergoing development, serves as a test bed for demonstrating the potential of aperture synthesis for remote sensing of the Earth, particularly for measuring spatial distributions of soil moisture and ocean-surface salinity. The goal is to use the technology for remote sensing aboard a spacecraft in orbit, but the basic principles of design and operation are applicable to remote sensing from aboard an aircraft, and the prototype of the system under development is designed for operation aboard an aircraft. In aperture synthesis, one utilizes several small antennas in combination with signal processing to obtain resolution that would otherwise require an antenna with a larger aperture (and, hence, one potentially more difficult to deploy in space). The principle upon which this system is based is similar to that of Earth-rotation aperture synthesis employed in radio astronomy. In this technology the coherent products (correlations) of signals from pairs of antennas are obtained at different antenna-pair spacings (baselines). The correlation for each baseline yields a sample point in a Fourier transform of the brightness-temperature map of the scene. An image of the scene itself is then reconstructed by inverting the sampled transform. The predecessor of the present two-dimensional synthetic-aperture radiometer is a one-dimensional one, named the Electrically Scanned Thinned Array Radiometer (ESTAR). Operating in the L band, the ESTAR employs aperture synthesis in the cross-track dimension only, while using a conventional antenna for resolution in the along-track dimension. The two-dimensional instrument also operates in the L band, specifically at 1.413 GHz, in the frequency band restricted to passive use (no transmission).
The L band was chosen because (1) the L band represents the long-wavelength end of the remote- sensing spectrum, where the problem of achieving adequate
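The baseline-sampling principle described above (each antenna-pair correlation yields one Fourier sample of the brightness-temperature map, and the scene is recovered by inverting the sampled transform) can be sketched in one dimension. This is a minimal illustration, assuming a 16-pixel scene and a complete set of baselines; real instruments sample the transform only partially and require a regularized inversion:

```python
import cmath

N = 16                          # pixels in the 1-D brightness-temperature scene
scene = [0.0] * N
scene[3], scene[9] = 2.0, 1.0   # two warm patches

# Each antenna-pair spacing (baseline) b yields one correlation, which is a
# sample of the Fourier transform of the scene.
vis = [sum(scene[x] * cmath.exp(-2j * cmath.pi * b * x / N) for x in range(N))
       for b in range(N)]

# With a complete set of baselines, inverting the sampled transform
# reconstructs the scene exactly.
recon = [(sum(vis[b] * cmath.exp(2j * cmath.pi * b * x / N)
              for b in range(N)) / N).real
         for x in range(N)]

print(max(abs(r - s) for r, s in zip(recon, scene)))
```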

  14. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check (LDPC) code.

  15. Construction of a 56 mm aperture high-field twin-aperture superconducting dipole model magnet

    SciTech Connect

    Ahlbaeck, J; Leroy, D.; Oberli, L.; Perini, D.; Salminen, J.; Savelainen, M.; Soini, J.; Spigo, G.

    1996-07-01

    A twin-aperture superconducting dipole model has been designed in collaboration with Finnish and Swedish Scientific Institutions within the framework of the LHC R and D program and has been built at CERN. Principal features of the magnet are 56 mm aperture, separate stainless steel collared coils, yoke closed after assembly at room temperature, and longitudinal prestressing of the coil ends. This paper recalls the main dipole design characteristics and presents some details of its fabrication including geometrical and mechanical measurements of the collared coil assembly.

  16. Fast parametric beamformer for synthetic aperture imaging.

    PubMed

    Nikolov, Svetoslav Ivanov; Jensen, Jørgen Arendt; Tomov, Borislav Gueorguiev

    2008-08-01

    This paper describes the design and implementation of a real-time delay-and-sum synthetic aperture beamformer. The beamforming delays and apodization coefficients are described parametrically. The image is viewed as a set of independent lines that are defined in 3D by their origin, direction, and inter-sample distance. The delay calculation is recursive and inspired by the coordinate rotation digital computer (CORDIC) algorithm. Only 3 parameters per channel and line are needed for their generation. The calculation of apodization coefficients is based on a piecewise linear approximation. The implementation of the beamformer is optimized with respect to the architecture of a novel synthetic aperture real-time ultrasound scanner (SARUS), in which 4 channels are processed by the same set of field-programmable gate arrays (FPGA). In synthetic transmit aperture imaging, low-resolution images are formed after every emission. Summing all low-resolution images produces a perfectly focused high-resolution image. The design of the beamformer is modular, and a single beamformation unit can produce 4600 low-resolution images per second, each consisting of 32 lines and 1024 complex samples per line. In its present incarnation, 3 such modules fit in a single device. The summation of low-resolution images is performed internally in the FPGA to reduce the required bandwidth. The delays are calculated with a precision of 1/16th of a sample, and the apodization coefficients with 7-bit precision. The accumulation of low-resolution images is performed with 24-bit precision. The level of the side- and grating lobes, introduced by the use of integer numbers in the calculations and truncation of intermediate results, is below -86 dB from the peak. PMID:18986919
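The low-resolution/high-resolution summation described above can be sketched with a toy delay-and-sum loop. The array geometry, sampling rate, and single-point phantom below are illustrative assumptions; the paper's recursive CORDIC-style delay generation, sub-sample interpolation, and apodization are not reproduced:

```python
import math

c, fs = 1540.0, 40e6                           # speed of sound (m/s), sampling rate (Hz)
pitch = 0.3e-3
elems = [(i - 3.5) * pitch for i in range(8)]  # element x-positions (m)
target = (0.0, 10e-3)                          # point scatterer at (x, z)

def delay_samples(tx, rx, x, z):
    """Round-trip propagation delay tx element -> (x, z) -> rx element, in samples."""
    return (math.hypot(x - tx, z) + math.hypot(x - rx, z)) / c * fs

# Synthesize ideal channel data: a unit echo at the round-trip delay.
n_samp = 1200
data = {}
for tx in elems:
    for rx in elems:
        rf = [0.0] * n_samp
        rf[round(delay_samples(tx, rx, *target))] = 1.0
        data[(tx, rx)] = rf

# Delay-and-sum: one low-resolution image per emission, summed into a
# high-resolution image (here a single image line along x = 0).
zs = [(9.0 + 0.1 * k) * 1e-3 for k in range(21)]
hires = [0.0] * len(zs)
for tx in elems:
    lowres = [0.0] * len(zs)
    for rx in elems:
        rf = data[(tx, rx)]
        for k, z in enumerate(zs):
            idx = round(delay_samples(tx, rx, 0.0, z))
            if 0 <= idx < n_samp:
                lowres[k] += rf[idx]
    for k in range(len(zs)):
        hires[k] += lowres[k]

print(max(range(len(zs)), key=lambda k: hires[k]))  # peak index at the scatterer depth
```

All 64 transmit/receive pairs add coherently only at the true scatterer position, which is exactly the synthetic-transmit-aperture focusing effect the hardware implements in fixed point.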

  17. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic Aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception. These affect the image quality and the frame rate. Therefore, optimization of parameters affecting the image quality of SA is of great importance, and this paper proposes a procedure for optimizing the parameters essential to image quality while generating high-resolution SA images. Optimization is mainly based on measures such as the F-number, the number of emissions, and the aperture size. They are considered the acquisition factors contributing most to the quality of the high-resolution images in SA. Image quality is therefore quantified in terms of the full width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields a very good image quality compared with using 256 emissions and the full aperture size. Therefore, the number of emissions and the maximum sweep angle in SA can be optimized to reach a reasonably good performance and to increase the frame rate by lowering the required number of emissions. All the measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
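The FWHM metric used above can be computed from a sampled point-spread-function profile by interpolating the half-maximum crossings. A minimal sketch, assuming a Gaussian test profile (the paper's profiles come from beamformed wire-phantom data):

```python
import math

def fwhm(xs, ys):
    """Full width at half maximum of a sampled single-peaked profile,
    using linear interpolation at the half-maximum crossings."""
    half = max(ys) / 2.0
    crossings = []
    for i in range(len(ys) - 1):
        y0, y1 = ys[i], ys[i + 1]
        if (y0 - half) * (y1 - half) < 0:        # sign change: a crossing
            t = (half - y0) / (y1 - y0)
            crossings.append(xs[i] + t * (xs[i + 1] - xs[i]))
    return crossings[-1] - crossings[0]

# Gaussian profile with sigma = 1: expected FWHM = 2*sqrt(2*ln 2) ~ 2.3548
xs = [-5.0 + 0.001 * i for i in range(10001)]
ys = [math.exp(-x * x / 2.0) for x in xs]
print(fwhm(xs, ys))
```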

  18. Multi-carrier synthetic aperture communication in shallow water: experimental results.

    PubMed

    Kang, Taehyuk; Song, H C; Hodgkiss, W S

    2011-12-01

    Orthogonal frequency division multiplexing (OFDM) communications in the presence of motion is investigated using data collected from the Kauai Acomms MURI 2008 (KAM08) experiment, conducted off the western side of Kauai, Hawaii, in June-July 2008. The experiment involved a vertical array moored in 106 m deep shallow water and a source towed at a speed of 3 knots at ranges between 600 m and 6 km. In order to attain reliable communications with only a single receive element, a synthetic aperture approach is applied. After combining multiple transmissions, an error-free reception is achieved with a low-density parity-check code, confirming the feasibility of coherent synthetic aperture communications using OFDM. PMID:22225037
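The OFDM transmit/receive chain at the heart of the experiment can be sketched for one symbol. The 8-subcarrier size, cyclic-prefix length, and two-tap channel below are illustrative assumptions; the synthetic aperture combining, motion compensation, and the low-density parity-check decoding are omitted:

```python
import cmath

N, CP = 8, 2                     # subcarriers, cyclic-prefix length
syms = [1+1j, 1-1j, -1+1j, -1-1j, 1+1j, -1-1j, 1-1j, -1+1j]   # QPSK data

def dft(x, sign):
    """N-point DFT (sign=-1) or unnormalized inverse DFT (sign=+1)."""
    return [sum(v * cmath.exp(sign * 2j * cmath.pi * k * n / N)
                for n, v in enumerate(x)) for k in range(N)]

# Transmitter: inverse DFT onto subcarriers, then prepend the cyclic prefix.
td = [v / N for v in dft(syms, +1)]
tx = td[-CP:] + td

# Two-tap multipath channel (linear convolution over the prefixed symbol).
h = [1.0, 0.5]
rx = [sum(h[m] * tx[n - m] for m in range(len(h)) if 0 <= n - m < len(tx))
      for n in range(len(tx))]

# Receiver: drop the prefix, DFT, and equalize each subcarrier with one tap.
R = dft(rx[CP:CP + N], -1)
H = dft(h + [0.0] * (N - len(h)), -1)
recovered = [Rk / Hk for Rk, Hk in zip(R, H)]
print(max(abs(a - b) for a, b in zip(recovered, syms)))
```

The cyclic prefix turns the linear channel convolution into a circular one, which is why each subcarrier can be equalized independently by a single complex division.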

  19. General implementation of thin-slot algorithms into the finite-difference time-domain code, TSAR, based on a slot data file

    SciTech Connect

    Riley, D.J.; Turner, C.D.

    1991-06-01

    Two methods for modeling arbitrary narrow apertures in finite-difference time-domain (FDTD) codes are presented in this paper. The first technique is based on the hybrid thin-slot algorithm (HTSA), which models the aperture physics using an integral-equation approach. This method can model slots that are narrow in both width and depth with respect to the FDTD spatial cell, but is restricted to planar apertures. The second method is based on a contour technique that directly modifies the FDTD equations local to the aperture. The contour method is geometrically more flexible than the HTSA, but the depth of the aperture is restricted to the actual FDTD mesh. A technique to incorporate both narrow-aperture algorithms into the FDTD code TSAR, based on a "slot data file", is presented. Results for a variety of complex aperture contours are provided, and limitations of the algorithms are discussed.
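For reference, the baseline FDTD update that these thin-slot algorithms modify locally can be sketched in one dimension (the standard Yee scheme, not TSAR's slot treatment; normalized units with Courant number 1, and the grid size and Gaussian source are illustrative assumptions):

```python
import math

M, T = 400, 250                  # grid size, number of time steps
ez = [0.0] * M                   # electric field (normalized units)
hy = [0.0] * M                   # magnetic field

for t in range(T):
    for m in range(M - 1):       # update H from the spatial difference of E
        hy[m] += ez[m + 1] - ez[m]
    for m in range(1, M):        # update E from the spatial difference of H
        ez[m] += hy[m] - hy[m - 1]
    ez[0] = math.exp(-((t - 30) / 10.0) ** 2)   # hard Gaussian source at node 0

# With Courant number 1 the pulse travels exactly one cell per step, so the
# peak launched at step 30 sits at node (T - 1) - 30 after the loop.
peak = max(range(M), key=lambda m: ez[m])
print(peak)
```

The slot algorithms in the paper replace or augment these local update equations at the cells adjacent to the aperture, which is why they can be driven from a per-slot data file without restructuring the main time-stepping loop.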

  20. Imaging correlography with sparse collecting apertures

    NASA Astrophysics Data System (ADS)

    Idell, Paul S.; Fienup, J. R.

    1987-01-01

    This paper investigates the possibility of implementing an imaging correlography system with sparse arrays of intensity detectors. The theory underlying the image formation process for imaging correlography is reviewed, emphasizing the spatial filtering effects that sparse collecting apertures have on the reconstructed imagery. Image recovery with sparse arrays of intensity detectors is then demonstrated through computer experiments in which laser speckle measurements are digitally simulated. It is shown that the quality of imagery reconstructed using this technique is visibly enhanced when appropriate filtering techniques are applied. A performance tradeoff between collecting array redundancy and the number of speckle pattern measurements is briefly discussed.
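The image-formation principle behind correlography, recovering the object's intensity autocorrelation from ensemble-averaged speckle intensity measurements, can be sketched in one dimension. The 16-pixel object and number of speckle realizations below are illustrative assumptions; the sparse-aperture filtering and the phase-retrieval step that turns the autocorrelation into an image are omitted:

```python
import cmath
import math
import random

random.seed(7)
N, K = 16, 1500                       # scene size, speckle realizations
obj = [0.0] * N                       # incoherent object intensity |f|^2
obj[2], obj[5], obj[8] = 4.0, 1.0, 1.0

# Ground truth: cyclic autocorrelation of the object intensity.
true_ac = [sum(obj[x] * obj[(x - s) % N] for x in range(N)) for s in range(N)]

W = [[cmath.exp(-2j * cmath.pi * u * x / N) for x in range(N)] for u in range(N)]

est = [0.0] * N
for _ in range(K):
    # Coherent field: object amplitude with an independent random phase per pixel.
    g = [math.sqrt(obj[x]) * cmath.exp(2j * math.pi * random.random())
         for x in range(N)]
    a = [sum(g[x] * W[u][x] for x in range(N)) for u in range(N)]
    I = [abs(v) ** 2 for v in a]      # far-field speckle intensity (what detectors see)
    # Inverse transform of the intensity gives the field autocorrelation per lag.
    c = [sum(I[u] * W[s][u].conjugate() for u in range(N)) / N for s in range(N)]
    for s in range(N):
        est[s] += abs(c[s]) ** 2 / K  # average the squared modulus over realizations

# est[s] converges to true_ac[s] for s != 0 (lag 0 carries a DC spike).
print([round(e, 2) for e in est[:8]])
```

Sparse collecting apertures, the subject of the paper, would sample the intensity I only at a subset of detector positions, which acts as a spatial filter on the recovered autocorrelation.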