Science.gov

Sample records for adaptive coded aperture

  1. Adaptive uniform grayscale coded aperture design for high dynamic range compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Diaz, Nelson; Rueda, Hoover; Arguello, Henry

    2016-05-01

Imaging spectroscopy is an important area with many applications in surveillance, agriculture and medicine. The disadvantage of conventional spectroscopy techniques is that they collect the whole datacube. In contrast, compressive spectral imaging systems capture snapshot compressive projections, which are the input of reconstruction algorithms to yield the underlying datacube. Common compressive spectral imagers use coded apertures to perform the coded projections. The coded apertures are the key elements in these imagers since they define the sensing matrix of the system. The proper design of the coded aperture entries leads to good quality in the reconstruction. In addition, the compressive measurements are prone to saturation due to the limited dynamic range of the sensor, hence the design of coded apertures must account for saturation. The saturation errors in compressive measurements are unbounded, and compressive sensing recovery algorithms only provide solutions for noise that is bounded, or bounded with high probability. This paper proposes the design of uniform adaptive grayscale coded apertures (UAGCA) to improve the dynamic range of the estimated spectral images by reducing the saturation levels. The saturation is attenuated between snapshots using an adaptive filter which updates the entries of the grayscale coded aperture based on the previous snapshots. The coded apertures are optimized in terms of transmittance and number of grayscale levels. The advantage of the proposed method is the efficient use of the dynamic range of the image sensor. Extensive simulations show improvements of up to 10 dB in the image reconstruction of the proposed method compared with uniform grayscale coded apertures (UGCA) and adaptive block-unblock coded apertures (ABCA).
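The snapshot-to-snapshot saturation feedback described in this abstract can be sketched in a few lines of numpy. Everything here (the scene statistics, the four transmittance levels, the step-down-one-level rule) is an invented illustration, not the authors' UAGCA design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scene (not from the paper's data); a bright patch saturates.
scene = rng.gamma(2.0, 2.0, size=(32, 32))
scene[10:14, 10:14] *= 20.0

SAT = 10.0                              # sensor saturation level
levels = np.linspace(0.25, 1.0, 4)      # allowed grayscale transmittances
code = np.ones(scene.shape)             # start fully transmissive

sat_counts = []
for snapshot in range(3):
    measurement = np.minimum(code * scene, SAT)   # saturating sensor
    saturated = measurement >= SAT
    sat_counts.append(int(saturated.sum()))
    # Adaptive rule (illustrative): step saturated entries down one level.
    idx = np.searchsorted(levels, code)
    idx[saturated] = np.maximum(idx[saturated] - 1, 0)
    code = levels[idx]

print(sat_counts)   # saturated-pixel count per snapshot (non-increasing)
```

Because the code entries only ever decrease, the number of saturated pixels cannot grow between snapshots, which is the qualitative behavior the abstract describes.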

  2. IR performance study of an adaptive coded aperture "diffractive imaging" system employing MEMS "eyelid shutter" technologies

    NASA Astrophysics Data System (ADS)

    Mahalanobis, A.; Reyner, C.; Patel, H.; Haberfelde, T.; Brady, David; Neifeld, Mark; Kumar, B. V. K. Vijaya; Rogers, Stanley

    2007-09-01

Adaptive coded aperture sensing is an emerging technology enabling real-time, wide-area IR/visible sensing and imaging. Exploiting unique imaging architectures, adaptive coded aperture sensors achieve wide field of view, near-instantaneous optical path repositioning, and high resolution while reducing the weight, power consumption and cost of air- and space-borne sensors. Such sensors may be used for military, civilian, or commercial applications in all optical bands, but there is special interest in diffraction imaging sensors for IR applications. Extension of coded apertures from the visible to the MWIR introduces the effects of diffraction and other distortions not observed in shorter-wavelength systems. A new approach is being developed under the DARPA/SPO funded LACOSTE (Large Area Coverage Optical Search-while-Track and Engage) program that addresses the effects of diffraction while gaining the benefits of coded apertures, thus providing flexibility to vary resolution, possess sufficient light gathering power, and achieve a wide field of view (WFOV). The photonic MEMS-Eyelid "sub-aperture" array technology is currently being instantiated in this DARPA program to be the heart of conducting the flow (heartbeat) of the incoming signal. However, packaging and scalability are critical factors for the MEMS "sub-aperture" technology which will determine system efficacy as well as military and commercial usefulness. As larger arrays with 1,000,000+ sub-apertures are produced for this LACOSTE effort, the available degrees of freedom (DOF) will enable better spatial resolution, control and refinement of the coding for the system. Studies (SNR simulations) will be performed (based on the adaptive coded aperture algorithm implementation) to determine the efficacy of this diffractive MEMS approach and to determine the available system budget based on simulated bi-static shutter-element DOF degradation (1%, 5%, 10%, 20%, etc.) trials until the degradation level where it is

  3. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
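The final correlation step (matching each point source's shadow against a digital copy of the aperture) can be illustrated with a toy periodic model. The random mask, grid size, and single point source below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16

aperture = (rng.random((n, n)) < 0.5).astype(float)   # random open/closed mask
obj = np.zeros((n, n))
obj[4, 7] = 1.0                                       # one point source

# Each point source casts a (cyclically shifted) shadow of the aperture.
shadow = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(aperture)))

# Correlate the shadow with a zero-mean copy of the aperture, i.e. the
# "digital version of the coded aperture" from the abstract.
decoder = aperture - aperture.mean()
recon = np.real(np.fft.ifft2(np.fft.fft2(shadow) * np.conj(np.fft.fft2(decoder))))

peak = np.unravel_index(np.argmax(recon), recon.shape)
print(peak)   # recovers the source position
```

The zero-mean decoder makes the correlation peak stand far above the random sidelobes, which is what lets the shadow image be decoded into a source map.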

  4. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot.

  5. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
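The cyclic property claimed here is easy to verify numerically: in a 2x2 tiling of a basic pattern, every basic-pattern-sized window is a cyclic shift of the basic pattern. A small sketch (pattern size and window offset are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
basic = rng.integers(0, 2, size=(n, n))

mosaic = np.tile(basic, (2, 2))          # 2x2 mosaic of the basic pattern

# Any n-by-n window of the mosaic is a cyclic shift of the basic pattern,
# so it carries the complete aperture information for artifact-free decoding.
r, c = 3, 5                              # arbitrary window offset
window = mosaic[r:r + n, c:c + n]
is_cyclic = np.array_equal(window, np.roll(basic, (-r, -c), axis=(0, 1)))
print(is_cyclic)   # True
```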

  6. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

The balanced correlation method and the maximum entropy method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time-consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  7. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for one of two reasons. First, the encoding/decoding method produces artifacts which, even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
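A 1D analogue of the URA/balanced-correlation idea uses a quadratic-residue (Legendre) sequence for a prime p with p mod 4 = 3: its periodic cross-correlation with a +1/-1 decoding array is a delta function with perfectly flat sidelobes, i.e. no correlation artifacts. This is a textbook construction sketched for illustration, not the authors' 2D URA:

```python
import numpy as np

p = 23                                    # prime with p % 4 == 3
residues = {(i * i) % p for i in range(1, p)}
a = np.array([1.0 if i in residues else 0.0 for i in range(p)])  # aperture

g = 2 * a - 1                             # balanced decoder: +1 open, -1 closed

# Periodic cross-correlation of aperture and decoder: a delta function
# with flat sidelobes (the artifact-free property the abstract refers to).
corr = np.array([float(np.dot(np.roll(a, t), g)) for t in range(p)])
print(corr[0], {int(v) for v in corr[1:]})   # peak (p-1)/2, sidelobes all -1
```

The flat sidelobe level is what distinguishes URA-type codes from generic random masks, whose sidelobes fluctuate and show up as artifacts in the image.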

  8. 1D fast coded aperture camera.

    PubMed

    Haw, Magnus; Bellan, Paul

    2015-04-01

A fast (100 MHz) 1D coded aperture visible light camera has been developed as a prototype for imaging plasma experiments in the EUV/X-ray bands. The system uses printed patterns on transparency sheets as the masked aperture and an 80-channel photodiode array (9 V reverse bias) as the detector. In the low-signal limit, the system has demonstrated a 40-fold increase in throughput and a signal-to-noise gain of ≈7 over that of a pinhole camera of equivalent parameters. In its present iteration, the camera can only image visible light; however, the only modifications needed to make the system EUV/X-ray sensitive are to acquire appropriate EUV/X-ray photodiodes and to machine a metal masked aperture. PMID:25933861

  10. Coded-aperture imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-11-01

Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x-ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas, nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  11. Coded-aperture imaging in nuclear medicine

    NASA Technical Reports Server (NTRS)

    Smith, Warren E.; Barrett, Harrison H.; Aarsvold, John N.

    1989-01-01

Coded-aperture imaging is a technique for imaging sources that emit high-energy radiation. This type of imaging involves shadow casting and not reflection or refraction. High-energy sources exist in x-ray and gamma-ray astronomy, nuclear reactor fuel-rod imaging, and nuclear medicine. Of these three areas, nuclear medicine is perhaps the most challenging because of the limited amount of radiation available and because a three-dimensional source distribution is to be determined. In nuclear medicine a radioactive pharmaceutical is administered to a patient. The pharmaceutical is designed to be taken up by a particular organ of interest, and its distribution provides clinical information about the function of the organ, or the presence of lesions within the organ. This distribution is determined from spatial measurements of the radiation emitted by the radiopharmaceutical. The principles of imaging radiopharmaceutical distributions with coded apertures are reviewed. Included is a discussion of linear shift-variant projection operators and the associated inverse problem. A system developed at the University of Arizona in Tucson consisting of small modular gamma-ray cameras fitted with coded apertures is described.

  12. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

This work discusses a large-scale, coded-aperture imager for fast neutrons, building off a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-D location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. This shielding configuration led
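The digital PSD step can be sketched with the common charge-comparison method: integrate the tail of each digitized pulse and divide by the total charge. The pulse shapes, time constants, and noise level below are invented for illustration, not EJ-309 measurements:

```python
import numpy as np

rng = np.random.default_rng(8)
t = np.arange(0.0, 200.0, 2.0)           # sample times [ns], illustrative

def pulse(tail_frac, tau_fast=7.0, tau_slow=60.0):
    """Two-component scintillator pulse; neutron recoils excite a larger slow tail."""
    p = (1 - tail_frac) * np.exp(-t / tau_fast) + tail_frac * np.exp(-t / tau_slow)
    return p + 0.002 * rng.standard_normal(t.size)   # digitizer noise

def psd_ratio(p, split=20.0):
    """Charge-comparison PSD: tail charge over total charge."""
    return p[t >= split].sum() / p.sum()

gammas   = [psd_ratio(pulse(0.05)) for _ in range(50)]   # γ-like pulses
neutrons = [psd_ratio(pulse(0.30)) for _ in range(50)]   # neutron-like pulses
print(max(gammas) < min(neutrons))        # True: the two populations separate
```

Thresholding this ratio is what filters the fast-neutron events out of the γ background in flash-digitized data.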

  13. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter

    2009-09-22

In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is on, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the data that is collected is processed through two versions of an image reconstruction algorithm. One treats the data as if it were obtained through the mask, the other as if it were obtained through the anti-mask.
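A toy periodic model shows why the mask/anti-mask pair identifies the side: decoding with the anti-mask produces exactly the negative of decoding with the mask, so a source appears as a peak through one pattern and a dip through the other. Mask size and source position below are arbitrary illustration choices:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 16
mask = (rng.random((n, n)) < 0.5).astype(float)
anti = 1.0 - mask                         # the inverse pattern on the other side

def reconstruct(det, pattern):
    dec = pattern - pattern.mean()        # zero-mean correlation decoder
    return np.real(np.fft.ifft2(np.fft.fft2(det) * np.conj(np.fft.fft2(dec))))

# A source on the mask side: the detector records a cyclic shadow of the mask.
src = np.zeros((n, n))
src[2, 9] = 1.0
det = np.real(np.fft.ifft2(np.fft.fft2(src) * np.fft.fft2(mask)))

r_mask = reconstruct(det, mask)
r_anti = reconstruct(det, anti)

# The mask decoder peaks at the source; the anti-mask decoder is its exact
# negative, so the sign of the extremum reveals which side the source is on.
peak = np.unravel_index(np.argmax(r_mask), r_mask.shape)
print(peak, bool(np.allclose(r_anti, -r_mask)))
```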

  14. Coded aperture devices for viewing extended objects from space

    NASA Technical Reports Server (NTRS)

    Curtis, C. C.; Hsieh, K. C.; Sandel, B. R.; Drake, V. A.

    1992-01-01

    Coded aperture sensors for photons or energetic neutral atoms (ENAs), which incorporate FOV limiters and subdivide the object field into a number of elements which is smaller than the number of detector pixels, are described. A least squares fit to the data is made in reconstructing the object field. To evaluate the optics and reconstruction algorithms, two 'breadboard' sensors have been constructed, one based on a film camera and the other upon a UV-light sensitive microchannel plate detector system. Results obtained thus far show that the concept is viable, and no special difficulties should be encountered in adapting the detector geometries to neutral particle imaging systems. Charged particle deflection plates could be incorporated into the region between the FOV limiter and the aperture, or installed ahead of the limiter.

  15. Adaptation and visual coding

    PubMed Central

    Webster, Michael A.

    2011-01-01

Visual coding is a highly dynamic process, continuously adapting to the current viewing context. The perceptual changes that result from adaptation to recently viewed stimuli remain a powerful and popular tool for analyzing sensory mechanisms and plasticity. Over the last decade, the footprints of this adaptation have been tracked to both higher and lower levels of the visual pathway and over a wider range of timescales, revealing that visual processing is much more adaptable than previously thought. This work has also revealed that the pattern of aftereffects is similar across many stimulus dimensions, pointing to common coding principles in which adaptation plays a central role. However, why visual coding adapts has yet to be fully answered. PMID:21602298

  16. Large aperture adaptive optics for intense lasers

    NASA Astrophysics Data System (ADS)

    Deneuville, François; Ropert, Laurent; Sauvageot, Paul; Theis, Sébastien

    2015-05-01

ISP SYSTEM has developed a range of large aperture electro-mechanical deformable mirrors (DM) suitable for ultra-short-pulse intense lasers. The design of the MD-AME deformable mirror is based on force application at numerous locations by electro-mechanical actuators driven by stepper motors. The DM design and assembly method have been adapted to large aperture beams, and the performance was evaluated in a first application for a beam with a diameter of 250 mm at 45° angle of incidence. A Strehl ratio above 0.9 was reached for this application. Simulations were correlated with measurements on an optical bench, and the design has been validated by calculation for very large apertures (up to Ø550 mm). Optical aberrations up to Zernike order 5 can be corrected with a very low residual error, as for the actual MD-AME mirror. The stroke can reach several hundred μm for low-order corrections. Hysteresis is lower than 0.1% and linearity better than 99%. Unlike piezo-electric actuators, the MD-AME actuators avoid print-through effects and keep the mirror shape stable even when unpowered, providing high resistance to electro-magnetic pulses. The MD-AME mirrors can be adapted to circular, square or elliptical beams and are compatible with all dielectric or metallic coatings.

  17. Coded aperture imaging for fluorescent x-rays

    SciTech Connect

    Haboub, A.; MacDowell, A. A.; Marchesini, S.; Parkinson, D. Y.

    2014-06-15

We employ a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector to image fluorescent x-rays (6–25 keV) from samples irradiated with synchrotron radiation. Coded apertures encode the angular direction of x-rays and, given a known source plane, allow for a large numerical aperture x-ray imaging system. The algorithm to generate and fabricate the free-standing No-Two-Holes-Touching aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of a ray tracing technique and confirmed by experiments on standard samples.

  18. Coded Aperture Imaging for Fluorescent X-rays-Biomedical Applications

    SciTech Connect

    Haboub, Abdel; MacDowell, Alastair; Marchesini, Stefano; Parkinson, Dilworth

    2013-06-01

Employing a coded aperture pattern in front of a pixelated charge-coupled device (CCD) detector allows imaging of fluorescent x-rays (6–25 keV) emitted from samples irradiated with x-rays. Coded apertures encode the angular direction of x-rays and allow for a large numerical aperture x-ray imaging system. The algorithm to generate the self-supported No-Two-Holes-Touching (NTHT) coded aperture pattern was developed. The algorithms to reconstruct the x-ray image from the recorded encoded pattern were developed by means of modeling and confirmed by experiments. Samples were irradiated by monochromatic synchrotron x-ray radiation, and fluorescent x-rays from several different test metal samples were imaged through the newly developed coded aperture imaging system. By choice of the excitation energy, the different metals were speciated.

  19. Development of large aperture composite adaptive optics

    NASA Astrophysics Data System (ADS)

    Kmetik, Viliam; Vitovec, Bohumil; Jiran, Lukas; Nemcova, Sarka; Zicha, Josef; Inneman, Adolf; Mikulickova, Lenka; Pavlica, Richard

    2015-01-01

Large aperture composite adaptive optics for laser applications is investigated in cooperation between the Institute of Plasma Physics, the Department of Instrumentation and Control Engineering of FME CTU, and 5M Ltd. We are exploring the possibility of producing a large-size high-power-laser deformable mirror using a lightweight bimorph-actuated structure with a composite core. In order to produce a sufficiently large operational free aperture, we are developing new technologies for production of the flexible core, the bimorph actuator and the deformable mirror reflector. A full simulation of the deformable-mirror structure was prepared and validated by complex testing. The deformable mirror actuation and the response of the complicated structure are investigated for accurate control of the adaptive optics. An original adaptive optics control system and a bimorph deformable mirror driver were developed. Tests of material samples, components and sub-assemblies were completed. A subscale 120 mm bimorph deformable mirror prototype was designed, fabricated and thoroughly tested. A large-size 300 mm composite-core bimorph deformable mirror was simulated and optimized, and fabrication of a prototype is under way. A measurement and testing facility is being modified to accommodate large-size optics.

  20. Multi-view coded aperture coherent scatter tomography

    NASA Astrophysics Data System (ADS)

    Holmgren, Andrew D.; Odinaka, Ikenna; Greenberg, Joel A.; Brady, David J.

    2016-05-01

We use coded apertures and multiple views to create a compressive coherent scatter computed tomography (CSCT) system. Compared with other CSCT systems, we reduce object dose and scan time. Previous work on coded aperture tomography resulted in a resolution anisotropy that caused poor or unusable momentum transfer resolution in certain cases. Complementary and multiple views resolve the resolution issues, while still providing the ability to perform snapshot tomography by adding sources and detectors.

  1. Adaptive SPECT imaging with crossed-slit apertures

    PubMed Central

    Durko, Heather L.; Furenlid, Lars R.

    2015-01-01

Preclinical single-photon emission computed tomography (SPECT) is an essential tool for studying the progression, response to treatment, and physiological changes in small animal models of human disease. The wide range of imaging applications is often limited by the static design of many preclinical SPECT systems. We have developed a prototype imaging system that replaces the standard static pinhole aperture with two sets of movable, keel-edged copper-tungsten blades configured as crossed (skewed) slits. These apertures can be positioned independently between the object and detector, producing a continuum of imaging configurations in which the axial and transaxial magnifications are not constrained to be equal. We incorporated a megapixel silicon double-sided strip detector to permit ultrahigh-resolution imaging. We describe the configuration of the adjustable slit aperture imaging system and discuss its application to adaptive imaging, along with reconstruction techniques that use an accurate imaging forward model, a novel geometric calibration technique, and a GPU-based ultrahigh-resolution reconstruction code. PMID:26190884

  2. Code aperture optimization for spectrally agile compressive imaging.

    PubMed

    Arguello, Henry; Arce, Gonzalo R

    2011-11-01

    Coded aperture snapshot spectral imaging (CASSI) provides a mechanism for capturing a 3D spectral cube with a single shot 2D measurement. In many applications selective spectral imaging is sought since relevant information often lies within a subset of spectral bands. Capturing and reconstructing all the spectral bands in the observed image cube, to then throw away a large portion of this data, is inefficient. To this end, this paper extends the concept of CASSI to a system admitting multiple shot measurements, which leads not only to higher quality of reconstruction but also to spectrally selective imaging when the sequence of code aperture patterns is optimized. The aperture code optimization problem is shown to be analogous to the optimization of a constrained multichannel filter bank. The optimal code apertures allow the decomposition of the CASSI measurement into several subsets, each having information from only a few selected spectral bands. The rich theory of compressive sensing is used to effectively reconstruct the spectral bands of interest from the measurements. A number of simulations are developed to illustrate the spectral imaging characteristics attained by optimal aperture codes.
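The CASSI forward model underlying this code-optimization work (code the scene with the aperture, then disperse each spectral band before integrating on the 2D sensor) can be sketched as a shift-and-sum. The cube size and the one-column-per-band dispersion step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
R, C, L = 8, 8, 4                        # small spectral cube: rows, cols, bands

cube = rng.random((R, C, L))
code = (rng.random((R, C)) < 0.5).astype(float)   # binary coded aperture

def cassi_shot(cube, code):
    """One CASSI snapshot: code the scene, then shear band k by k columns."""
    R, C, L = cube.shape
    y = np.zeros((R, C + L - 1))
    for k in range(L):
        y[:, k:k + C] += code * cube[:, :, k]     # dispersion as a column shift
    return y

y = cassi_shot(cube, code)
print(y.shape)   # (8, 11): one 2D snapshot of the 3D cube
```

Optimizing the sequence of `code` patterns across shots is what the paper casts as a constrained multichannel filter-bank design; this sketch only shows the measurement operator those codes parameterize.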

  3. Coded aperture imaging with a HURA coded aperture and a discrete pixel detector

    NASA Astrophysics Data System (ADS)

    Byard, Kevin

An investigation into the gamma ray imaging properties of a hexagonal uniformly redundant array (HURA) coded aperture and a detector consisting of discrete pixels constituted the major research effort. Such a system offers distinct advantages for the development of advanced gamma ray astronomical telescopes in terms of the provision of high quality sky images in conjunction with an imager plane which has the capacity to reject background noise efficiently. Much of the research was performed as part of the European Space Agency (ESA) sponsored study into a prospective space astronomy mission, GRASP. The effort involved both computer simulations and a series of laboratory test images. A detailed analysis of the system point spread function (SPSF) of imaging planes which incorporate discrete pixel arrays is presented, and the imaging quality is quantified in terms of the signal-to-noise ratio (SNR). Computer simulations of weak point sources in the presence of detector background noise were also investigated. Theories developed during the study were evaluated by a series of experimental measurements with a Co-57 gamma ray point source, an Anger camera detector, and a rotating HURA mask. These tests were complemented by computer simulations designed to reproduce, as closely as possible, the experimental conditions. The 60 degree antisymmetry property of HURAs was also employed to remove noise due to detector systematic effects present in the experimental images, and rendered a more realistic comparison of the laboratory tests with the computer simulations. Plateau removal and weighted deconvolution techniques were also investigated as methods for the reduction of the coding error noise associated with the gamma ray images.

  4. Two-dimensional aperture coding for magnetic sector mass spectrometry.

    PubMed

    Russell, Zachary E; Chen, Evan X; Amsden, Jason J; Wolter, Scott D; Danell, Ryan M; Parker, Charles B; Stoner, Brian R; Gehm, Michael E; Brady, David J; Glass, Jeffrey T

    2015-02-01

    In mass spectrometer design, there has been a historic belief that there exists a fundamental trade-off between instrument size, throughput, and resolution. When miniaturizing a traditional system, performance loss in either resolution or throughput would be expected. However, in optical spectroscopy, both one-dimensional (1D) and two-dimensional (2D) aperture coding have been used for many years to break a similar trade-off. To provide a viable path to miniaturization for harsh environment field applications, we are investigating similar concepts in sector mass spectrometry. Recently, we demonstrated the viability of 1D aperture coding and here we provide a first investigation of 2D coding. In coded optical spectroscopy, 2D coding is preferred because of increased measurement diversity for improved conditioning and robustness of the result. To investigate its viability in mass spectrometry, analytes of argon, acetone, and ethanol were detected using a custom 90-degree magnetic sector mass spectrometer incorporating 2D coded apertures. We developed a mathematical forward model and reconstruction algorithm to successfully reconstruct the mass spectra from the 2D spatially coded ion positions. This 2D coding enabled a 3.5× throughput increase with minimal decrease in resolution. Several challenges were overcome in the mass spectrometer design to enable this coding, including the need for large uniform ion flux, a wide gap magnetic sector that maintains field uniformity, and a high resolution 2D detection system for ion imaging. Furthermore, micro-fabricated 2D coded apertures incorporating support structures were developed to provide a viable design that allowed ion transmission through the open elements of the code. PMID:25510933
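A linear forward model of the kind this abstract describes (each mass channel projects a shifted copy of the 2D aperture code onto the detector) and its least-squares inversion can be sketched as follows. The code size, the one-column-per-channel shift rule, and the spectrum are invented, not the instrument's actual calibration:

```python
import numpy as np

rng = np.random.default_rng(5)
code = rng.integers(0, 2, size=(6, 6)).astype(float)   # 2D coded aperture
n_mass = 5                                             # resolvable mass channels

# Illustrative forward model: the sector shifts the aperture image of mass
# channel m by m columns on the detector; columns of H are the shifted codes.
width = code.shape[1] + n_mass - 1
H = np.zeros((code.shape[0] * width, n_mass))
for m in range(n_mass):
    img = np.zeros((code.shape[0], width))
    img[:, m:m + code.shape[1]] = code
    H[:, m] = img.ravel()

spectrum = np.array([0.0, 2.0, 0.0, 1.0, 0.5])         # true channel abundances
y = H @ spectrum                                       # simulated detector image

est, *_ = np.linalg.lstsq(H, y, rcond=None)            # reconstruction
print(np.round(est, 6))   # recovers the spectrum
```

Because the columns of `H` are zero-padded shifts of one nonzero pattern, they are linearly independent, so the least-squares step recovers the channel abundances exactly in this noiseless sketch.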

  5. Telescope Adaptive Optics Code

    2005-07-28

The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.
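The FFT phase-screen step mentioned above is a standard recipe: shape complex white noise by the square root of the Kolmogorov phase power spectrum and inverse-transform. This sketch covers only that step (the Karhunen-Loeve patch for the missing low orders is omitted), grid parameters are assumed values, and normalization conventions vary between implementations:

```python
import numpy as np

rng = np.random.default_rng(6)
N, dx = 64, 0.1                          # grid size and spacing [m] (assumed)
r0 = 0.2                                 # Fried parameter [m] (assumed)

fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
f = np.hypot(FX, FY)
f[0, 0] = np.inf                         # suppress the undefined piston term

# Kolmogorov phase power spectral density: 0.023 r0^(-5/3) f^(-11/3).
psd = 0.023 * r0 ** (-5 / 3) * f ** (-11 / 3)
df = 1.0 / (N * dx)                      # frequency-grid spacing

cn = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
screen = np.real(np.fft.ifft2(cn * np.sqrt(psd * df**2))) * N * N
print(screen.shape)                      # one (N, N) turbulence phase screen [rad]
```

FFT screens under-represent spatial frequencies below 1/(N dx), which is exactly why the code described above adds the low orders back with a Karhunen-Loeve expansion.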

  6. Vision aided inertial navigation system augmented with a coded aperture

    NASA Astrophysics Data System (ADS)

    Morrison, Jamie R.

    Navigation through a three-dimensional indoor environment is a formidable challenge for an autonomous micro air vehicle. A main obstacle to indoor navigation is maintaining a robust navigation solution (i.e. air vehicle position and attitude estimates) given the inadequate access to satellite positioning information. A MEMS (micro-electro-mechanical system) based inertial navigation system provides a small, power efficient means of maintaining a vehicle navigation solution; however, unmitigated error propagation from relatively noisy MEMS sensors results in the loss of a usable navigation solution over a short period of time. Several navigation systems use camera imagery to diminish error propagation by measuring the direction to features in the environment. Changes in feature direction provide information regarding direction for vehicle movement, but not the scale of movement. Movement scale information is contained in the depth to the features. Depth-from-defocus is a classic technique proposed to derive depth from a single image that involves analysis of the blur inherent in a scene with a narrow depth of field. A challenge to this method is distinguishing blurriness caused by the focal blur from blurriness inherent to the observed scene. In 2007, MIT's Computer Science and Artificial Intelligence Laboratory demonstrated replacing the traditional rounded aperture with a coded aperture to produce a complex blur pattern that is more easily distinguished from the scene. A key to measuring depth using a coded aperture then is to correctly match the blur pattern in a region of the scene with a previously determined set of blur patterns for known depths. As the depth increases from the focal plane of the camera, the observable change in the blur pattern for small changes in depth is generally reduced. Consequently, as the depth of a feature to be measured using a depth-from-defocus technique increases, the measurement performance decreases. However, a Fresnel zone
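The matching step described in this abstract (compare an observed blur patch against blur patterns precomputed for known depths) can be sketched as a sum-of-squared-differences search. The code pattern, the scale-with-depth rule, and the noise level are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical coded-aperture pattern; in depth-from-defocus the blur kernel
# is this pattern magnified by an amount that depends on scene depth.
base = np.array([[1, 0, 1],
                 [0, 1, 0],
                 [1, 0, 1]], dtype=float)

def kernel_for_depth(scale):
    k = np.kron(base, np.ones((scale, scale)))    # code magnified with depth
    return k / k.sum()                            # normalize total light

bank = {d: kernel_for_depth(d) for d in (1, 2, 3)}  # known-depth blur bank

def pad_to(k, shape):
    out = np.zeros(shape)
    out[:k.shape[0], :k.shape[1]] = k
    return out

# Observed blur patch: the depth-2 kernel plus a little sensor noise.
obs = bank[2] + 0.005 * rng.standard_normal(bank[2].shape)

shape = bank[3].shape
ssd = {d: float(np.sum((pad_to(k, shape) - pad_to(obs, shape)) ** 2))
       for d, k in bank.items()}
best = min(ssd, key=ssd.get)
print(best)   # matches the true depth, 2
```

The structured code makes the kernels for different depths easy to tell apart, which is the advantage over a round aperture that the abstract attributes to the MIT work; it also hints at the measurement problem at large depths, where adjacent kernels in the bank become nearly identical.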

  7. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and in a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission functions for MACA and CMACA are derived using Fourier methods and, based on Fresnel-Kirchhoff diffraction theory, formulae for the point spread function (PSF) are obtained. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.
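A minimal numeric sketch of the MACA transmission function and its complement, assuming Fresnel-zone-like radii; the zone radii and grid size below are illustrative choices, not the paper's actual parameters, and the PSF itself would follow by Fresnel propagation of these masks.

```python
import numpy as np

def maca_transmission(r, zone_edges):
    # MACA: open annuli between alternating pairs of zone radii.
    t = np.zeros_like(r)
    for inner, outer in zip(zone_edges[0::2], zone_edges[1::2]):
        t[(r >= inner) & (r < outer)] = 1.0
    return t

n = 256
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
r = np.hypot(x, y)
edges = np.sqrt(np.arange(1, 9) / 9.0)        # Fresnel-zone-like radii (assumed)
maca = maca_transmission(r, edges)
# CMACA: the complement of MACA within the full circular aperture.
cmaca = (r < edges[-1]).astype(float) - maca
print(int((maca * cmaca).sum()))              # -> 0: the two masks are disjoint
```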

  8. Design of a coded aperture Compton telescope imaging system (CACTIS)

    NASA Astrophysics Data System (ADS)

    Volkovskii, Alexander; Clajus, Martin; Gottesman, Stephen R.; Malik, Hans; Schwartz, Kenneth; Tumer, Evren; Tumer, Tumay; Yin, Shi

    2010-08-01

    We have developed a prototype of a scalable high-resolution direction- and energy-sensitive gamma-ray detection system that operates in both coded aperture (CA) and Compton scatter (CS) modes to obtain optimal efficiency and angular resolution over a wide energy range. The design consists of an active coded aperture constructed from 52 individual CZT planar detectors, each measuring 3×3×6 mm³, arranged in a MURA pattern on a 10×10 grid, with a monolithic 20×20×5 mm³ pixelated (8×8) CZT array serving as the focal plane. The combined mode is achieved by using the aperture-plane array both for Compton scattering of high-energy photons and as a coded mask for low-energy radiation. The prototype instrument was built using two RENA-3 test systems, one each for the aperture and the focal plane, stacked on top of each other at a distance of 130 mm. The test systems were modified to synchronize readout and provide coincidence information for events within a user-adjustable 40-1,280 ns window. The measured angular resolution of the device is <1 deg (17 mrad) in CA mode and is predicted to be approximately 3 deg (54 mrad) in CS mode. The energy resolution of the CZT detectors is approximately 5% FWHM at 120 keV. We will present details of the system design and initial results for the calibration and performance of the prototype.

  9. Lensless coded aperture imaging with separable doubly Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2014-05-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within reasonable weight and volume. One solution is to use coded apertures - opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, the images can be decoded and a clear image computed. Recently, computational imaging and the search for means of producing programmable, software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, MURAs (Modified Uniformly Redundant Arrays), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: 1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and 2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. This paper presents a new class of coded apertures, Separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images - orders of magnitude faster than MURAs - and which remain decodable when diffracted. We implemented the masks using programmable spatial light modulators. Imaging experiments confirmed the effectiveness of Separable Doubly-Toeplitz masks: images of extended outdoor scenes collected in natural light are rendered clearly.
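The separability that makes these masks fast to decode can be shown with a toy model: the measurement is Y = A X Bᵀ with Toeplitz A and B, so decoding reduces to two small (pseudo-)inversions along each axis instead of one huge joint inversion. The binary codes and sizes below are invented for illustration, not the authors' masks.

```python
import numpy as np

rng = np.random.default_rng(1)

def toeplitz_from_code(code, m, n):
    # A[i, j] = code[i - j + n - 1]: constant along diagonals (Toeplitz).
    idx = np.arange(m)[:, None] - np.arange(n)[None, :] + n - 1
    return code[idx].astype(float)

m, n = 24, 16                                  # overdetermined: m > n
a = toeplitz_from_code(rng.integers(0, 2, m + n - 1), m, n)
b = toeplitz_from_code(rng.integers(0, 2, m + n - 1), m, n)

scene = rng.random((n, n))
meas = a @ scene @ b.T                         # separable doubly Toeplitz encoding
recon = np.linalg.pinv(a) @ meas @ np.linalg.pinv(b).T
print(np.allclose(recon, scene))               # True when both codes have full column rank
```

Decoding each axis independently is what makes the cost scale with the image side length rather than with its area.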

  10. High-Resolution Light Field Capture With Coded Aperture.

    PubMed

    Wang, Yu-Ping; Wang, Li-Chun; Kong, De-Hui; Yin, Bao-Cai

    2015-12-01

    Acquiring light fields with high angular resolution and high spatial resolution at low cost is the goal of light field capture. Combining or modifying traditional optical cameras is the usual approach to designing light field capture equipment; most such models must trade off angular against spatial resolution, whereas adding a coded aperture avoids this trade-off by multiplexing information from different views. Building on the coded aperture, this paper proposes an improved light field camera model with two measurements and a single mask. The two compressive measurements are realized by a coded aperture and a random-convolution CMOS imager, the latter serving as the camera's imaging sensor. The single-mask design permits high light efficiency, which gives the sampled images high clarity. The double-measurement design retains more correlation information, which is conducive to enhancing the reconstructed light field. Higher clarity and more correlation in the samples mean a higher-quality rebuilt light field, and hence higher resolution for a given PSNR requirement. Experimental results verify the advantage of the proposed design: compared with representative mask-based light field camera models, the proposed model achieves the highest reconstruction quality and a higher light efficiency.

  11. Colored coded-apertures for spectral image unmixing

    NASA Astrophysics Data System (ADS)

    Vargas, Hector M.; Arguello Fuentes, Henry

    2015-10-01

    Hyperspectral remote sensing technology provides detailed spectral information for every pixel in an image. Due to the low spatial resolution of hyperspectral image sensors and the presence of multiple materials in a scene, each pixel can contain more than one spectral signature. Therefore, endmember extraction is used to determine the pure spectral signatures of the mixed materials and their corresponding abundance maps in a remotely sensed hyperspectral scene. Advanced endmember extraction algorithms have been proposed to solve this linear problem, called spectral unmixing. However, such techniques require the acquisition of the complete hyperspectral data cube to perform the unmixing procedure. Researchers have shown that colored coded-apertures improve the quality of reconstruction in compressive spectral imaging (CSI) systems under compressive sensing (CS) theory. This work aims at developing a compressive supervised spectral unmixing scheme to estimate the endmembers and the abundance maps from compressive measurements. The compressive measurements are acquired by using colored coded-apertures in a compressive spectral imaging system. A numerical procedure then estimates the sparse vector representation in a 3D dictionary by solving a constrained sparse optimization problem. The 3D dictionary is formed from a 2D wavelet basis and a known endmember spectral library, where the wavelet basis is used to exploit the spatial information. The colored coded-apertures are designed such that the sensing matrix satisfies the restricted isometry property with high probability. Simulations show that the proposed scheme attains results comparable to the full-data-cube unmixing technique, but using fewer measurements.
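The constrained sparse recovery step can be sketched with a generic iterative soft-thresholding (ISTA) solver on synthetic data. The sensing matrix here is random Gaussian rather than a colored coded-aperture matrix, and the sizes and sparsity level are invented, purely to illustrate recovering a sparse vector from fewer measurements than unknowns.

```python
import numpy as np

rng = np.random.default_rng(2)

def ista(a, y, lam=0.05, steps=500):
    # Iterative soft-thresholding for min 0.5*||y - a x||^2 + lam*||x||_1.
    step = 1.0 / np.linalg.norm(a, 2) ** 2      # 1 / Lipschitz constant
    x = np.zeros(a.shape[1])
    for _ in range(steps):
        g = x + step * (a.T @ (y - a @ x))      # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam * step, 0.0)  # shrinkage
    return x

m, n, k = 40, 100, 4                            # compressive: m < n, k-sparse truth
a = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, k, replace=False)
x_true[support] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
y = a @ x_true

x_hat = ista(a, y)
# The largest recovered entries should sit on the true support.
print(sorted(np.argsort(-np.abs(x_hat))[:k].tolist()))
```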

  12. Imaging plasmas with coded aperture methods instead of conventional optics

    NASA Astrophysics Data System (ADS)

    Wongwaitayakornkul, Pakorn; Bellan, Paul

    2012-10-01

    The spheromak and astrophysical jet plasma at Caltech emits localized EUV and X-rays associated with magnetic reconnection. However, conventional optics does not work for EUV or X-rays due to their high energy. Coded aperture imaging is an alternative method that will work at these energies. The technique has been used in spacecraft for high-energy radiation and also in nuclear medicine. Coded aperture imaging works by having patterns of materials opaque to various wavelengths block and unblock radiation in a known pattern. The original image can be determined from a numerical procedure that inverts information from the coded shadow on the detector plane. A one-dimensional coded mask has been designed and constructed for visualization of the evolution of a 1-d cross-section image of the Caltech plasmas. The mask is constructed from Hadamard matrices. Arrays of photo-detectors will be assembled to obtain an image of the plasmas in the visible light range. The experiment will ultimately be re-configured to image X-ray and EUV radiation.
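A Hadamard-matrix mask and the numerical inversion it enables can be sketched in a few lines. The S-matrix (derived from a Sylvester Hadamard matrix) is a standard choice with an open fraction near one half and a well-conditioned inverse; the 7-element mask and made-up emission profile below are illustrative only, not the Caltech design.

```python
import numpy as np

def sylvester_hadamard(n):
    # Sylvester construction: valid when n is a power of two.
    h = np.array([[1]])
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

def s_matrix(n):
    # Hadamard S-matrix: drop first row/column, map -1 -> 1 (open), +1 -> 0 (opaque).
    h = sylvester_hadamard(n + 1)
    return (h[1:, 1:] == -1).astype(float)

s = s_matrix(7)                          # 7-slot mask; each row is one mask position
profile = np.array([3., 1., 4., 1., 5., 9., 2.])   # hypothetical 1-d emission profile
readings = s @ profile                   # each reading sums roughly half the slots
recon = np.linalg.inv(s) @ readings      # numerical inversion recovers the profile
print(np.allclose(recon, profile))       # -> True
```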

  13. Adaptive Full Aperture Wavefront Sensor Study

    NASA Technical Reports Server (NTRS)

    Robinson, William G.

    1997-01-01

    This grant and the work described were in support of a Seven Segment Demonstrator (SSD) and a review of wavefront sensing techniques proposed by the Government and contractors for the Next Generation Space Telescope (NGST) program. A team developed the SSD concept. For completeness, some of the information included in this report has also been included in the final report of a follow-on contract (H-27657D) entitled "Construction of Prototype Lightweight Mirrors". The original purpose of this GTRI study was to investigate how various wavefront sensing techniques might be most effectively employed with large (greater than 10 meter) aperture space-based telescopes used for commercial and scientific purposes. However, due to changes in the scope of the work performed on this grant, and in light of the initial studies completed for the NGST program, only a portion of this report addresses wavefront sensing techniques. The wavefront sensing techniques proposed for the NGST were summarized in proposals and briefing materials developed by three study teams: NASA Goddard Space Flight Center, TRW, and Lockheed-Martin. In this report, GTRI reviews these approaches and makes recommendations concerning them. The objectives of the SSD were to demonstrate the functionality and performance of a seven-segment prototype array of hexagonal mirrors and the supporting electromechanical components which address design issues critical to space optics deployed in large space-based telescopes for astronomy and to optics used in space-based optical communications systems. The SSD was intended to demonstrate technologies which can support the following capabilities: Transportation in dense packaging to existing launcher payload envelopes, then deployment on orbit to form a space telescope with large aperture. Provide very large (greater than 10 meters) primary reflectors of low mass and cost. 
Demonstrate the capability to form a segmented primary or

  14. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array (URA) pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than the holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, non-holes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput. The balanced correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
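The balanced-correlation decoding described here can be demonstrated in one dimension with a quadratic-residue URA: holes decode as +1 and non-holes as -1, so a point source reconstructs to a sharp peak on a perfectly flat background. The length-11 code and point-source scene are illustrative choices, not the patent's patterns.

```python
import numpy as np

p = 11                                         # prime code length, p % 4 == 3
qr = {(i * i) % p for i in range(1, p)}
mask = np.array([1.0 if i in qr else 0.0 for i in range(p)])   # 1 = hole

decode = 2 * mask - 1                          # holes -> +1, non-holes -> -1

source = np.zeros(p)
source[3] = 1.0                                # point source at position 3
# shadow[k]: overlap of the shifted mask with the source (cyclic geometry).
shadow = np.array([np.dot(np.roll(mask, -k), source) for k in range(p)])
# image[k]: balanced correlation of the shadow with the decoding array.
image = np.array([np.dot(np.roll(decode, -k), shadow) for k in range(p)])
print(int(np.argmax(image)))                   # -> 3: peak at the source position
```

For this (11, 5, 2) quadratic-residue code the peak equals the number of holes (5) and every off-peak value is exactly -1, which is the flat-sidelobe property that makes URAs artifact-free.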

  15. Coded aperture Fast Neutron Analysis: Latest design advances

    NASA Astrophysics Data System (ADS)

    Accorsi, Roberto; Lanza, Richard C.

    2001-07-01

    Past studies have shown that materials of concern such as explosives or narcotics can be identified in bulk from their atomic composition. Fast Neutron Analysis (FNA) is a nuclear method capable of providing this information even when considerable penetration is needed. Unfortunately, the cross sections of the relevant nuclear phenomena and the solid angles involved are typically small, so it is difficult to obtain high signal-to-noise ratios (SNR) in short inspection times. Coded aperture Fast Neutron Analysis (CAFNA) aims at combining the compound specificity of FNA with the potentially high SNR of coded apertures, an imaging method used successfully in far-field 2D applications. The transition to a near-field, 3D, high-energy problem prevents a straightforward application of coded apertures and demands a thorough optimization of the system. In this paper, the considerations involved in the design of a practical CAFNA system for contraband inspection, its conclusions, and an estimate of the performance of such a system are presented as the evolution of the ideas presented in previous expositions of the CAFNA concept.

  16. Correlated Statistical Uncertainties in Coded-Aperture Imaging

    SciTech Connect

    Fleenor, Matthew C; Blackston, Matthew A; Ziock, Klaus-Peter

    2014-01-01

    In nuclear security applications, coded-aperture imagers provide the opportunity for a wealth of information regarding the attributes of both the radioactive and non-radioactive components of the objects being imaged. However, for optimum benefit to the community, spatial attributes need to be determined in a quantitative and statistically meaningful manner. To address the deficiency of quantifiable errors in coded-aperture imaging, we present uncertainty matrices containing covariance terms between image pixels for MURA mask patterns. We calculated these correlated uncertainties as functions of variation in mask rank, mask pattern over-sampling, and whether or not anti-mask data are included. Utilizing simulated point source data, we found that correlations (and inverse correlations) arose when two or more image pixels were summed. Furthermore, we found that the presence of correlations (and their inverses) was heightened by the process of over-sampling, while correlations were suppressed by the inclusion of anti-mask data and with increased mask rank. As an application of this result, we explore how statistics-based alarming in nuclear security is impacted.
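The practical consequence for alarming is that the variance of a summed region is wᵀΣw, not the sum of the per-pixel variances. A toy two-pixel example, with invented numbers, makes the gap explicit:

```python
import numpy as np

# Invented 2x2 pixel covariance with an inverse (negative) correlation term.
cov = np.array([[4.0, -1.5],
                [-1.5, 4.0]])
w = np.ones(2)                    # weights that sum the two pixels

var_sum = w @ cov @ w             # correct: includes the covariance terms
var_naive = np.trace(cov)         # wrong: treats the pixels as independent
print(var_sum, var_naive)         # -> 5.0 8.0
```

Ignoring a negative covariance overstates the summed uncertainty (positive covariance would understate it), biasing any statistics-based alarm threshold set from the naive value.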

  17. Development of the strontium iodide coded aperture (SICA) instrument

    NASA Astrophysics Data System (ADS)

    Mitchell, Lee J.; Phlips, Bernard F.; Grove, J. Eric; Cordes, Ryan

    2015-08-01

    This work reports on the development of a Strontium Iodide Coded Aperture (SICA) instrument for use in space-based astrophysics, solar physics, and high-energy atmospheric physics. The Naval Research Laboratory is developing a prototype coded aperture imager that will consist of an 8 x 8 array of SrI2:Eu detectors, each read out by a silicon photomultiplier. The array would be used to demonstrate SrI2:Eu detector performance for space-based missions. Europium-doped strontium iodide (SrI2:Eu) detectors have recently become available, and the material is a strong candidate to replace existing detector technology currently used for space-based gamma-ray astrophysics research. The detectors have a typical energy resolution of 3.2% at 662 keV, a significant improvement over the 6.5% energy resolution of thallium-doped sodium iodide. With a density of 4.59 g/cm³ and a Zeff of 49, SrI2:Eu has a high efficiency for MeV gamma-ray detection. Coupling this with recent improvements in silicon photomultiplier technology (i.e., no bulky photomultiplier tubes) enables high-density, large-area, low-power detector arrays with good energy resolution. The energy resolution of SrI2:Eu also makes it ideal for use as the back plane of a Compton telescope.

  18. A thermal neutron source imager using coded apertures

    SciTech Connect

    Vanier, P.E.; Forman, L.; Selcow, E.C.

    1995-08-01

    To facilitate the process of re-entry vehicle on-site inspections, it would be useful to have an imaging technique which would allow the counting of deployed multiple nuclear warheads without significant disassembly of a missile's structure. Since neutrons cannot easily be shielded without massive amounts of materials, they offer a means of imaging the separate sources inside a sealed vehicle. Thermal neutrons carry no detailed spectral information, so their detection should not be as intrusive as gamma ray imaging. A prototype device for imaging at close range with thermal neutrons has been constructed using an array of ³He position-sensitive gas proportional counters combined with a uniformly redundant coded aperture array. A sealed ²⁵²Cf source surrounded by a polyethylene moderator is used as a test source. By means of slit and pinhole experiments, count rates of image-forming neutrons (those which cast a shadow of a Cd aperture on the detector) are compared with the count rates for background neutrons. The resulting ratio, which limits the available image contrast, is measured as a function of distance from the source. The envelope of performance of the instrument is defined by the contrast ratio, the angular resolution, and the total count rate as a function of distance from the source. These factors will determine whether such an instrument could be practical as a tool for treaty verification.

  19. Implementation of Hadamard spectroscopy using MOEMS as a coded aperture

    NASA Astrophysics Data System (ADS)

    Vasile, T.; Damian, V.; Coltuc, D.; Garoi, F.; Udrea, C.

    2015-02-01

    Although spectrometers have nowadays reached a high level of performance, output signals are often weak, and traditional slit spectrometers still confront the problem of poor optical throughput, which limits their efficiency in low-light conditions. To overcome these issues, Hadamard spectroscopy (HS) was implemented in a conventional Ebert-Fastie spectrometer setup by substituting the exit slit with a digital micro-mirror device (DMD), which acts as a coded aperture. The theory behind HS and the functionality of the DMD are presented. The improvements brought by HS are demonstrated by means of a spectrometric experiment, and a higher-SNR spectrum is acquired. Comparative experiments were conducted to quantify the SNR difference between HS and the scanning-slit method. The results show an SNR gain of 3.35 in favor of HS. One can conclude that the HS method is a great asset for low-light spectrometric experiments.
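The multiplex (Fellgett) advantage behind that SNR gain can be reproduced in simulation: with detector-limited noise, opening roughly half the spectral elements per reading and inverting beats scanning a single slit. The 31-element quadratic-residue S-matrix and noise level below are illustrative stand-ins for the DMD patterns, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

def s_matrix(n):
    # Cyclic S-matrix from a quadratic-residue sequence (n prime, n % 4 == 3).
    qr = {(i * i) % n for i in range(1, n)}
    row = np.array([1.0 if i in qr else 0.0 for i in range(n)])
    return np.array([np.roll(row, k) for k in range(n)])

n, sigma = 31, 1.0                        # spectral elements; detector noise (assumed)
s = s_matrix(n)
spectrum = rng.uniform(0.0, 10.0, n)

slit = spectrum + rng.normal(0, sigma, n)                  # one element per reading
hadamard = np.linalg.inv(s) @ (s @ spectrum + rng.normal(0, sigma, n))

print(np.std(slit - spectrum), np.std(hadamard - spectrum))  # multiplexing wins
```

Theory predicts an RMS-error reduction of about √n/2 (≈2.8 here) for detector-noise-limited systems, of the same order as the 3.35 gain reported above.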

  20. Snapshot fan beam coded aperture coherent scatter tomography.

    PubMed

    Hassan, Mehadi; Greenberg, Joel A; Odinaka, Ikenna; Brady, David J

    2016-08-01

    We use coherently scattered X-rays to measure the molecular composition of an object throughout its volume. We image a planar slice of the object in a single snapshot by illuminating it with a fan beam and placing a coded aperture between the object and the detectors. We characterize the system and demonstrate a resolution of 13 mm in range and 2 mm in cross-range and a fractional momentum transfer resolution of 15%. In addition, we show that this technique allows a 100x speedup compared to previously-studied pencil beam systems using the same components. Finally, by scanning an object through the beam, we image the full 4-dimensional data cube (3 spatial and 1 material dimension) for complete volumetric molecular imaging. PMID:27505791

  2. Hybrid Compton camera/coded aperture imaging system

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.

    2012-04-10

    A system in one embodiment includes an array of radiation detectors; and an array of imagers positioned behind the array of detectors relative to an expected trajectory of incoming radiation. A method in another embodiment includes detecting incoming radiation with an array of radiation detectors; detecting the incoming radiation with an array of imagers positioned behind the array of detectors relative to a trajectory of the incoming radiation; and performing at least one of Compton imaging using at least the imagers and coded aperture imaging using at least the imagers. A method in yet another embodiment includes detecting incoming radiation with an array of imagers positioned behind an array of detectors relative to a trajectory of the incoming radiation; and performing Compton imaging using at least the imagers.

  3. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

    A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer s parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.
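The file-to-frames path of such a driver can be sketched in pure Python. The 32-channels-per-chip, 14-bit layout matches the AD5535 datasheet, but the one-code-per-line file format and the grouping scheme are assumptions for illustration, not the actual NASA C code.

```python
from io import StringIO

CHANNELS_PER_CHIP, NUM_CHIPS = 32, 4          # AD5535: 32 channels, 14-bit codes

def load_dac_codes(f):
    # One integer code per line, 128 lines total; validate the 14-bit range.
    codes = [int(line) for line in f if line.strip()]
    assert len(codes) == CHANNELS_PER_CHIP * NUM_CHIPS
    assert all(0 <= c < 2 ** 14 for c in codes)
    return codes

def frames(codes):
    # Group channel n of every chip so all four chips update simultaneously.
    for ch in range(CHANNELS_PER_CHIP):
        yield [codes[chip * CHANNELS_PER_CHIP + ch] for chip in range(NUM_CHIPS)]

text = StringIO("\n".join(str(i * 100) for i in range(128)))
first = next(frames(load_dac_codes(text)))
print(first)        # channel 0 of chips 0-3: [0, 3200, 6400, 9600]
```

Each yielded frame would then be clocked out over the parallel port in one transaction, which is what lets the four chips update in lockstep.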

  4. A novel approach to correct the coded aperture misalignment for fast neutron imaging

    SciTech Connect

    Zhang, F. N.; Hu, H. S. Wang, D. M.; Jia, J.; Zhang, T. K.; Jia, Q. G.

    2015-12-15

    Aperture alignment is crucial for neutron imaging diagnostics because it strongly affects the coded image and the interpretation of the neutron source. In our previous studies of a coded-aperture neutron imaging system with a large field of view, a "residual watermark" - certain extra information that overlies the reconstructed image and has nothing to do with the source - was discovered when peak normalization is employed in the genetic algorithm (GA) used to reconstruct the source image. Studies of the basic properties of the residual watermark indicate that it characterizes the coded aperture and can thus be used to determine the location of the coded aperture relative to the system axis. In this paper, we further analyze the essential conditions for the existence of the residual watermark and the requirements the reconstruction algorithm must meet for it to emerge. A gamma coded-imaging experiment has been performed to verify the existence of the residual watermark. Based on the residual watermark, a correction method for aperture misalignment has been studied. A multiple linear regression model relating the position of the coded aperture axis to the position of the residual watermark center and the gray barycenter of the neutron source was set up with twenty training samples. Using the regression model and verification samples, we found the position of the coded aperture axis relative to the system axis with an accuracy of approximately 20 μm. In conclusion, a novel approach has been established to correct coded aperture misalignment for fast neutron coded imaging.
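The misalignment regression amounts to an ordinary least-squares fit of the aperture-axis offset against the watermark centre and the source's grey barycentre. Everything below - the coefficients, the noise level, and the twenty synthetic samples - is invented to illustrate the fitting procedure, not the paper's measured data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed model: offset = 0.8 * watermark_centre - 0.3 * barycentre + 0.05.
true_coef, true_intercept = np.array([0.8, -0.3]), 0.05
x = rng.uniform(-1.0, 1.0, (20, 2))       # 20 training samples, 2 predictors
y = x @ true_coef + true_intercept + rng.normal(0.0, 0.001, 20)

a = np.column_stack([x, np.ones(20)])     # design matrix with intercept column
coef, *_ = np.linalg.lstsq(a, y, rcond=None)
print(np.round(coef, 2))                  # recovers ~[0.8, -0.3, 0.05]
```

Verification samples would then be pushed through the fitted model to check the residual error, which is how the ~20 μm accuracy figure above would be established.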

  5. Adaptive Matching of the Scanning Aperture of the Environment Parameter

    NASA Astrophysics Data System (ADS)

    Choni, Yu. I.; Yunusov, N. N.

    2016-04-01

    We analyze a matching system for the scanning aperture antenna radiating through a layer with unpredictably changing parameters. Improved matching has been achieved by adaptive motion of a dielectric plate in the gap between the aperture and the radome. The system is described within the framework of an infinite layered structure. The validity of the model has been confirmed by numerical simulation using CST Microwave Studio software and by an experiment. It is shown that the reflection coefficient at the input of some types of a matching device, which is due to the deviation of the load impedance from the nominal value, is determined by a compact and versatile formula. The potential efficiency of the proposed matching system is shown by a specific example, and its dependence on the choice of the starting position of the dielectric plate is demonstrated.

  6. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

    An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty involving the MHD singularity on the rational surfaces at marginal stability. Our code follows in part the philosophy of DCON, abandoning relaxation methods based on radial finite-element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is just a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue ω problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport-barrier physics in tokamak discharges.
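The shooting idea - integrate the equations for a trial eigenvalue, then adjust the eigenvalue until the far boundary condition is met - is a textbook technique; the sketch below applies it to the simple model problem y'' + λy = 0, y(0) = y(1) = 0 (lowest eigenvalue π²), not to the MHD equations themselves.

```python
def shoot(lam, n=2000):
    # Integrate y'' = -lam*y from x=0 to 1 with y(0)=0, y'(0)=1 (RK4); return y(1).
    h = 1.0 / n
    y, v = 0.0, 1.0
    f = lambda y_, v_: (v_, -lam * y_)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

# Bisect on the boundary residual y(1); the lowest eigenvalue lies in [5, 15].
lo, hi = 5.0, 15.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if shoot(lo) * shoot(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(0.5 * (lo + hi), 4))   # -> 9.8696, i.e. pi**2
```

A production code like AEST replaces the scalar residual with a matrix built from the independent solutions, and the bisection with a matrix eigenvalue solve, but the shoot-and-match structure is the same.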

  7. Event Localization in Bulk Scintillator Crystals Using Coded Apertures

    SciTech Connect

    Ziock, Klaus-Peter; Braverman, Joshua B.; Fabris, Lorenzo; Harrison, Mark J.; Hornback, Donald Eric; Newby, Jason

    2015-06-01

    The localization of radiation interactions in bulk scintillators is generally limited by the size of the light distribution at the readout surface of the crystal/light-pipe system. By finding the centroid of the light spot, which is typically of order centimeters across, practical single-event localization is limited to ~2 mm/cm of crystal thickness. Similar resolution can also be achieved for the depth of interaction by measuring the size of the light spot. Through the use of near-field coded-aperture techniques applied to the scintillation light, light transport simulations show that for 3-cm-thick crystals, more than a five-fold improvement (millimeter spatial resolution) can be achieved both laterally and in event depth. At the core of the technique is the requirement to resolve the shadow from an optical mask placed in the scintillation light path between the crystal and the readout. In this paper, experimental results are presented that demonstrate the overall concept using a 1D shadow mask, a thin-scintillator crystal and a light pipe of varying thickness to emulate a 2.2-cm-thick crystal. Spatial resolutions of ~ 1 mm in both depth and transverse to the readout face are obtained over most of the crystal depth.
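The baseline the coded mask improves upon - centroiding the light spot - is easy to demonstrate. The 8×8 readout, Gaussian spot shape, and photon counts below are invented for illustration, not the paper's crystal or simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented example: Gaussian light spot from an event at (3.2, 4.7) on an
# 8x8 readout array (pixel units), with Poisson photon-counting noise.
n = 8
xs = np.arange(n) + 0.5                       # pixel centres
xx, yy = np.meshgrid(xs, xs, indexing="ij")
true_pos = (3.2, 4.7)
spot = np.exp(-((xx - true_pos[0]) ** 2 + (yy - true_pos[1]) ** 2) / (2 * 1.5 ** 2))
counts = rng.poisson(200 * spot)

# Centroid (first moment) of the light distribution estimates the event position.
total = counts.sum()
est = ((xx * counts).sum() / total, (yy * counts).sum() / total)
print(np.round(est, 2))
```

The centroid uses only the first moment of a centimetres-wide spot, which is why its resolution saturates; resolving the shadow of an optical mask in that same light path is what buys the five-fold improvement described above.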

  8. Mask design and fabrication in coded aperture imaging

    NASA Astrophysics Data System (ADS)

    Shutler, Paul M. E.; Springham, Stuart V.; Talebitaher, Alireza

    2013-05-01

    We introduce the new concept of a row-spaced mask, where a number of blank rows are interposed between every pair of adjacent rows of holes of a conventional cyclic difference set based coded mask. At the cost of a small loss in signal-to-noise ratio, this can substantially reduce the number of holes required to image extended sources, at the same time increasing mask strength uniformly across the aperture, as well as making the mask automatically self-supporting. We also show that the Finger and Prince construction can be used to wrap any cyclic difference set onto a two-dimensional mask, regardless of the number of its pixels. We use this construction to validate by means of numerical simulations not only the performance of row-spaced masks, but also the pixel padding technique introduced by in 't Zand. Finally, we provide a computer program CDSGEN.EXE which, on a fast modern computer and for any Singer set of practical size and open fraction, generates the corresponding pattern of holes in seconds.
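The Finger and Prince wrapping sends element k of Z_v to grid cell (k mod r, k mod s); when gcd(r, s) = 1 this is a bijection (Chinese remainder theorem), so the 2D cyclic autocorrelation inherits the difference set's flat sidelobes. Below, a published (21, 5, 1) planar (Singer-type) difference set is wrapped onto a 3×7 grid; the particular set is just one example, not output of CDSGEN.EXE.

```python
import numpy as np

singer = [3, 6, 7, 12, 14]        # a (21, 5, 1) planar difference set mod 21
v, r, s = 21, 3, 7                # v = r * s with gcd(r, s) = 1

mask = np.zeros((r, s), dtype=int)
for k in singer:
    mask[k % r, k % s] = 1        # Finger-Prince wrap: k -> (k mod r, k mod s)

# 2D cyclic autocorrelation: peak k = 5 at (0, 0), lambda = 1 everywhere else.
auto = np.array([[np.sum(mask * np.roll(mask, (dr, ds), axis=(0, 1)))
                  for ds in range(s)] for dr in range(r)])
print(auto[0, 0], auto.min())     # -> 5 1
```

Because every nonzero difference occurs exactly λ = 1 time, every off-peak sidelobe of the wrapped mask is exactly 1, which is the property the numerical simulations in the paper exploit.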

  9. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh have been presented. A discussion of the limitations and potential areas of further study is also presented.

  10. Analysis for simplified optics coma effection on spectral image inversion of coded aperture spectral imager

    NASA Astrophysics Data System (ADS)

    Liu, Yangyang; Lv, Qunbo; Li, Weiyan; Xiangli, Bin

    2015-09-01

    Push-broom coded aperture spectral imaging (PCASI), a spectral imaging technology developed in recent years, has the advantages of high throughput, high SNR, and high stability. This coded aperture spectral imaging approach utilizes fixed code templates and a push-broom mode, which enable high-precision reconstruction of spatial and spectral information. However, minor coma errors inevitably arise during the design, manufacture, and alignment of the optical lens, and even minor coma can reduce image quality. In this paper, we simulate the influence of the system's optical coma on the quality of the reconstructed image, analyze how the coded aperture varies under different amounts of coma, and derive an accurate curve relating image quality to coma for a 255×255 code template, which provides an important reference for the design and development of push-broom coded aperture spectrometers.

  11. Direct aperture optimization for online adaptive radiation therapy

    SciTech Connect

    Mestrovic, Ante; Milette, Marie-Pierre; Nichol, Alan; Clark, Brenda G.; Otto, Karl

    2007-05-15

    This paper is the first investigation of using direct aperture optimization (DAO) for online adaptive radiation therapy (ART). A geometrical model representing the anatomy of a typical prostate case was created. To simulate interfractional deformations, four different anatomical deformations were created by systematically deforming the original anatomy by various amounts (0.25, 0.50, 0.75, and 1.00 cm). We describe a series of techniques where the original treatment plan was adapted in order to correct for the deterioration of dose distribution quality caused by the anatomical deformations. We found that the average time needed to adapt the original plan to arrive at a clinically acceptable plan is roughly half of the time needed for a complete plan regeneration, for all four anatomical deformations. Furthermore, through modification of the DAO algorithm the optimization search space was reduced and the plan adaptation was significantly accelerated. For the first anatomical deformation (0.25 cm), the plan adaptation was six times more efficient than the complete plan regeneration. For the 0.50 and 0.75 cm deformations, the optimization efficiency was increased by a factor of roughly 3 compared to the complete plan regeneration. However, for the anatomical deformation of 1.00 cm, the reduction of the optimization search space during plan adaptation did not result in any efficiency improvement over the original (nonmodified) plan adaptation. The anatomical deformation of 1.00 cm demonstrates the limit of this approach. We propose an innovative approach to online ART in which the plan adaptation and radiation delivery are merged together and performed concurrently: adaptive radiation delivery (ARD). A fundamental advantage of ARD is the fact that radiation delivery can start almost immediately after image acquisition and evaluation. Most of the original plan adaptation is done during the radiation delivery, so the time spent adapting the original plan does not

  12. A dual-sided coded-aperture radiation detection system

    NASA Astrophysics Data System (ADS)

    Penny, R. D.; Hood, W. E.; Polichar, R. M.; Cardone, F. H.; Chavez, L. G.; Grubbs, S. G.; Huntley, B. P.; Kuharski, R. A.; Shyffer, R. T.; Fabris, L.; Ziock, K. P.; Labov, S. E.; Nelson, K.

    2011-10-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5×5×50 cm³ cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.

  13. Far field 3D localization of radioactive hot spots using a coded aperture camera.

    PubMed

    Shifeng, Sun; Zhiming, Zhang; Lei, Shuai; Daowu, Li; Yingjie, Wang; Yantao, Liu; Xianchao, Huang; Haohui, Tang; Ting, Li; Pei, Chai; Yiwen, Zhang; Wei, Zhou; Mingjie, Yang; Cunfeng, Wei; Chuangxin, Ma; Long, Wei

    2016-01-01

    This paper presents a coded aperture method to remotely estimate the radioactivity of a source. The activity is estimated from the detected counts and the estimated source location, which is extracted by factoring the effect of aperture magnification. A 6 mm thick tungsten-copper alloy coded aperture mask is used to modulate the incoming gamma-rays. The location of point and line sources in all three dimensions was estimated with an error of less than 10% when the source-camera distance was about 4 m. The estimated activities were 17.6% smaller and 50.4% larger than the actual activities for the point and line sources, respectively.
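In the simplest point-source case, the activity estimate described above reduces to scaling the detected count rate by the solid-angle fraction subtended by the detector and the detection efficiency. The following is a minimal sketch of that relation, not the paper's actual calibration; the `estimate_activity` helper and all numerical values are hypothetical:

```python
import math

def estimate_activity(counts, live_time_s, distance_m,
                      det_area_m2, intrinsic_eff, branching=1.0):
    """Estimate source activity (Bq) from detected counts, assuming a
    point source, inverse-square geometry, and negligible attenuation."""
    solid_angle_fraction = det_area_m2 / (4.0 * math.pi * distance_m ** 2)
    count_rate = counts / live_time_s          # counts per second
    return count_rate / (solid_angle_fraction * intrinsic_eff * branching)

# Hypothetical example: 5000 counts in 60 s at 4 m with a 0.01 m^2 detector
# and 30% intrinsic efficiency.
activity_bq = estimate_activity(5000, 60.0, 4.0, 0.01, 0.3)
```

In the paper's method the distance itself comes from the coded-aperture magnification; here it is simply assumed known.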

  14. Adaptive differential pulse-code modulation with adaptive bit allocation

    NASA Astrophysics Data System (ADS)

    Frangoulis, E. D.; Yoshida, K.; Turner, L. F.

    1984-08-01

    Studies have examined the possibility of obtaining good-quality speech at data rates in the range from 16 kbit/s to 32 kbit/s. The techniques considered are adaptive predictive coding (APC) and adaptive differential pulse-code modulation (ADPCM); at 16 kbit/s, adaptive transform coding (ATC) has also been used. The present investigation is concerned with a new method of speech coding. The described method employs adaptive bit allocation, similar to that used in adaptive transform coding, together with adaptive differential pulse-code modulation employing first-order prediction. The new method aims to improve speech quality over that obtainable with conventional ADPCM employing a fourth-order predictor. Attention is given to the ADPCM-AB system, the design of a subjective test, and the application of switched preemphasis to ADPCM.
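The first-order-prediction ADPCM at the core of the method can be sketched as a toy encoder/decoder pair. This is an illustration only: the predictor coefficient `a`, step size, and word length below are made-up values, and the paper's adaptive bit-allocation stage is omitted:

```python
import numpy as np

def adpcm_first_order(x, a=0.9, step=0.1, nbits=4):
    """Toy first-order ADPCM: predict each sample as a * previous
    reconstruction, quantize the prediction error uniformly, and keep the
    decoder's reconstruction in lockstep with the encoder."""
    levels = 2 ** (nbits - 1)
    recon = np.zeros(len(x))
    prev = 0.0
    codes = []
    for i, s in enumerate(x):
        pred = a * prev
        err = s - pred
        q = int(np.clip(round(err / step), -levels, levels - 1))
        codes.append(q)
        prev = pred + q * step        # decoder reproduces the same value
        recon[i] = prev
    return np.array(codes), recon

# Encode and decode a test tone.
t = np.linspace(0, 1, 200)
signal = np.sin(2 * np.pi * 5 * t)
codes, recon = adpcm_first_order(signal)
mse = float(np.mean((signal - recon) ** 2))
```

Adaptive bit allocation would vary `nbits` per block according to the local error statistics instead of fixing it.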

  15. Augmenting synthetic aperture radar with space time adaptive processing

    NASA Astrophysics Data System (ADS)

    Riedl, Michael; Potter, Lee C.; Ertin, Emre

    2013-05-01

    Wide-area persistent radar video offers the ability to track moving targets. A shortcoming of the current technology is an inability to maintain track when Doppler shift places moving target returns co-located with strong clutter. Further, the high down-link data rate required for wide-area imaging presents a stringent system bottleneck. We present a multi-channel approach to augment the synthetic aperture radar (SAR) modality with space time adaptive processing (STAP) while constraining the down-link data rate to that of a single antenna SAR system. To this end, we adopt a multiple transmit, single receive (MISO) architecture. A frequency division design for orthogonal transmit waveforms is presented; the approach maintains coherence on clutter, achieves the maximal unaliased band of radial velocities, retains full resolution SAR images, and requires no increase in receiver data rate vis-a-vis the wide-area SAR modality. For Nt transmit antennas and N samples per pulse, the enhanced sensing provides a STAP capability with Nt times larger range bins than the SAR mode, at the cost of O(log N) more computations per pulse. The proposed MISO system and the associated signal processing are detailed, and the approach is numerically demonstrated via simulation of an airborne X-band system.

  16. MTF analysis for coded aperture imaging in a flat panel display

    NASA Astrophysics Data System (ADS)

    Suh, Sungjoo; Han, Jae-Joon; Park, Dusik

    2014-09-01

    In this paper, we analyze the modulation transfer function (MTF) of coded aperture imaging in a flat panel display. The flat panel display with a sensor panel forms lens-less multi-view cameras through the imaging pattern of the modified redundant arrays (MURA) on the display panel. To analyze the MTF of the coded aperture imaging implemented on the display panel, we first mathematically model the encoding process of coded aperture imaging, where the projected image on the sensor panel is modeled as a convolution of the scaled object and a function of the imaging pattern. Then, the system point spread function is determined by incorporating a decoding process that depends on the pixel pitch of the display screen and the decoding function. Finally, the MTF of the system is derived as the magnitude of the Fourier transform of the determined system point spread function. To demonstrate the validity of the mathematically derived MTF in the system, we build a coded aperture imaging system that can capture the scene in front of the display, where the system consists of a display screen and a sensor panel. Experimental results show that the derived MTF of coded aperture imaging in a flat panel display system corresponds well to the measured MTF.
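The final step, taking the MTF as the magnitude of the Fourier transform of the system point spread function, can be illustrated numerically. The PSF below is a hypothetical 1-D Gaussian stand-in; the actual PSF depends on the MURA pattern, pixel pitch, and decoding function:

```python
import numpy as np

# Hypothetical Gaussian system PSF sampled on a 1-D pixel grid.
n = 256
x = np.arange(n) - n // 2
sigma = 2.0
psf = np.exp(-x**2 / (2 * sigma**2))
psf /= psf.sum()                       # normalize to unit area

# MTF = magnitude of the Fourier transform of the PSF,
# normalized to 1 at zero spatial frequency.
mtf = np.abs(np.fft.fft(psf))
mtf /= mtf[0]
freqs = np.fft.fftfreq(n)              # spatial frequency, cycles per pixel
```

A broader PSF (worse resolution) would show up directly as a faster roll-off of `mtf` with frequency.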

  17. Order of Magnitude Signal Gain in Magnetic Sector Mass Spectrometry Via Aperture Coding.

    PubMed

    Chen, Evan X; Russell, Zachary E; Amsden, Jason J; Wolter, Scott D; Danell, Ryan M; Parker, Charles B; Stoner, Brian R; Gehm, Michael E; Glass, Jeffrey T; Brady, David J

    2015-09-01

    Miniaturizing instruments for spectroscopic applications requires the designer to confront a tradeoff between instrument resolution and instrument throughput [and associated signal-to-background-ratio (SBR)]. This work demonstrates a solution to this tradeoff in sector mass spectrometry by the first application of one-dimensional (1D) spatially coded apertures, similar to those previously demonstrated in optics. This was accomplished by replacing the input slit of a simple 90° magnetic sector mass spectrometer with a specifically designed coded aperture, deriving the corresponding forward mathematical model and spectral reconstruction algorithm, and then utilizing the resulting system to measure and reconstruct the mass spectra of argon, acetone, and ethanol. We expect the application of coded apertures to sector instrument designs will lead to miniature mass spectrometers that maintain the high performance of larger instruments, enabling field detection of trace chemicals and point-of-use mass spectrometry. PMID:26111517

  18. Coded aperture imaging - Predicted performance of uniformly redundant arrays

    NASA Technical Reports Server (NTRS)

    Fenimore, E. E.

    1978-01-01

    It is noted that uniformly redundant arrays (URAs) have autocorrelation functions with perfectly flat sidelobes. A generalized signal-to-noise equation has been developed to predict URA performance. The signal-to-noise value is formulated as a function of aperture transmission or density, the ratio of the intensity of a resolution element to the integrated source intensity, and the ratio of detector background noise to the integrated intensity. It is shown that the only two-dimensional URAs known have a transmission of one half. This is not a great limitation because a nonoptimum transmission of one half never reduces the signal-to-noise ratio more than 30%. The reconstructed URA image contains practically uniform noise, regardless of the object structure. URA's improvement over the single-pinhole camera is much larger for high-intensity points than for low-intensity points.
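The flat-sidelobe autocorrelation property can be checked directly for a 1-D array built from quadratic residues modulo a prime p ≡ 3 (mod 4), one standard URA-style construction (the arrays in the abstract are two-dimensional; this 1-D analog with p = 19 is just a small illustration):

```python
import numpy as np

p = 19                                  # prime with p % 4 == 3
residues = {(i * i) % p for i in range(1, p)}
a = np.array([1 if i in residues else 0 for i in range(p)])  # 1 = open cell

# Periodic (cyclic) autocorrelation of the {0,1} aperture pattern.
acorr = np.array([int(np.dot(a, np.roll(a, s))) for s in range(p)])
```

For this construction the peak equals the number of open cells, (p - 1)/2 = 9, and every off-peak lag equals (p - 3)/4 = 4: perfectly flat sidelobes, exactly the property the abstract attributes to URAs.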

  19. A coded aperture imaging system optimized for hard X-ray and gamma ray astronomy

    NASA Technical Reports Server (NTRS)

    Gehrels, N.; Cline, T. L.; Huters, A. F.; Leventhal, M.; Maccallum, C. J.; Reber, J. D.; Stang, P. D.; Teegarden, B. J.; Tueller, J.

    1985-01-01

    A coded aperture imaging system was designed for the Gamma-Ray Imaging Spectrometer (GRIS). The system is optimized for imaging 511 keV positron-annihilation photons. For a galactic center 511-keV source strength of 0.001 photons/(sq cm s), the source location accuracy is expected to be ±0.2 deg.

  20. Experimental implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2015-03-01

    A fast and accurate scatter imaging technique to differentiate cancerous and healthy breast tissue is introduced in this work. Such a technique would have wide-ranging clinical applications from intra-operative margin assessment to breast cancer screening. Coherent Scatter Computed Tomography (CSCT) has been shown to differentiate cancerous from healthy tissue, but the need to raster scan a pencil beam at a series of angles and slices in order to reconstruct 3D images makes it prohibitively time consuming. In this work we apply the coded aperture coherent scatter spectral imaging technique to reconstruct 3D images of breast tissue samples from experimental data taken without the rotation usually required in CSCT. We present our experimental implementation of coded aperture scatter imaging, the reconstructed images of the breast tissue samples and segmentations of the 3D images in order to identify the cancerous and healthy tissue inside of the samples. We find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside of them. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside of ex vivo samples within a time on the order of a minute.

  1. A new pad-based neutron detector for stereo coded aperture thermal neutron imaging

    NASA Astrophysics Data System (ADS)

    Dioszegi, I.; Yu, B.; Smith, G.; Schaknowski, N.; Fried, J.; Vanier, P. E.; Salwen, C.; Forman, L.

    2014-09-01

    A new coded aperture thermal neutron imager system has been developed at Brookhaven National Laboratory. The cameras use a new type of position-sensitive 3He-filled ionization chamber, in which an anode plane is composed of an array of pads with independent acquisition channels. The charge is collected on each of the individual 5×5 mm² anode pads (48×48 in total, corresponding to a 24×24 cm² sensitive area) and read out by application-specific integrated circuits (ASICs). The new design has several advantages for coded-aperture imaging applications in the field, compared to the previous generation of wire-grid based neutron detectors. Among these are its rugged design, lighter weight, and use of non-flammable stopping gas. The pad-based readout occurs in parallel circuits, making it capable of high count rates and also suitable for performing data analysis and imaging on an event-by-event basis. The spatial resolution of the detector can be better than the pixel size by using a charge-sharing algorithm. In this paper we report on the development and performance of the new pad-based neutron camera, describe a charge-sharing algorithm to achieve sub-pixel spatial resolution, and present the first stereoscopic coded aperture images of thermalized neutron sources using the new coded aperture thermal neutron imager system.
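The charge-sharing idea, resolving position finer than the 5 mm pad pitch, amounts to a charge-weighted centroid over the pads in a cluster. A minimal 1-D sketch; the helper name and the charge values are hypothetical, not the detector's actual algorithm:

```python
import numpy as np

PITCH_MM = 5.0   # pad size quoted for this detector

def subpixel_position(charges, pitch=PITCH_MM):
    """Estimate event position along one axis from the charge collected on
    a row of neighboring pads, via a charge-weighted centroid."""
    charges = np.asarray(charges, dtype=float)
    pad_centers = (np.arange(len(charges)) + 0.5) * pitch  # mm from edge
    return float(np.sum(charges * pad_centers) / np.sum(charges))

# Hypothetical event sharing charge 3:1 between two adjacent 5 mm pads:
# the centroid lands between the pad centers, i.e. at sub-pixel precision.
pos = subpixel_position([0.0, 30.0, 10.0, 0.0])
```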

  2. Domain and range decomposition methods for coded aperture x-ray coherent scatter imaging

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Kaganovsky, Yan; O'Sullivan, Joseph A.; Politte, David G.; Holmgren, Andrew D.; Greenberg, Joel A.; Carin, Lawrence; Brady, David J.

    2016-05-01

    Coded aperture X-ray coherent scatter imaging is a novel modality for ascertaining the molecular structure of an object. Measurements from different spatial locations and spectral channels in the object are multiplexed through a radiopaque material (coded aperture) onto the detectors. Iterative algorithms such as penalized expectation maximization (EM) and fully separable spectrally-grouped edge-preserving reconstruction have been proposed to recover the spatially-dependent coherent scatter spectral image from the multiplexed measurements. Such image recovery methods fall into the category of domain decomposition methods since they recover independent pieces of the image at a time. Ordered subsets has also been utilized in conjunction with penalized EM to accelerate its convergence. Ordered subsets is a range decomposition method because it uses parts of the measurements at a time to recover the image. In this paper, we analyze domain and range decomposition methods as they apply to coded aperture X-ray coherent scatter imaging using a spectrally-grouped edge-preserving regularizer and discuss the implications of the increased availability of parallel computational architecture on the choice of decomposition methods. We present results of applying the decomposition methods on experimental coded aperture X-ray coherent scatter measurements. Based on the results, an underlying observation is that updating different parts of the image or using different parts of the measurements in parallel, decreases the rate of convergence, whereas using the parts sequentially can accelerate the rate of convergence.
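The range-decomposition idea (ordered subsets) can be sketched on a generic nonnegative linear model: each sub-iteration applies the EM multiplicative update using only a subset of the measurement rows, cycling through all subsets. This is a plain OS-EM sketch on synthetic noiseless data, not the paper's penalized, spectrally-grouped edge-preserving variant:

```python
import numpy as np

rng = np.random.default_rng(0)

# Small synthetic nonnegative linear model y = A x.
A = rng.uniform(0.1, 1.0, size=(40, 8))
x_true = rng.uniform(1.0, 5.0, size=8)
y = A @ x_true

def os_em(A, y, n_iter=200, n_subsets=4):
    """Ordered-subsets EM (a range decomposition): each sub-iteration uses
    only one subset of the measurement rows, cycling through all subsets."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            # Multiplicative EM update restricted to this measurement subset.
            x = x * (As.T @ (ys / (As @ x))) / As.sum(axis=0)
    return x

x_hat = os_em(A, y)
err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

With `n_subsets = 1` this reduces to ordinary EM; larger `n_subsets` trades per-iteration work for faster early progress, which is the acceleration the abstract refers to.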

  3. SU-E-J-20: Adaptive Aperture Morphing for Online Correction for Prostate Cancer Radiotherapy

    SciTech Connect

    Sandhu, R; Qin, A; Yan, D

    2014-06-01

    Purpose: Online adaptive aperture morphing is desirable over translational couch shifts to accommodate not only the target position variation but also anatomic changes (rotation, deformation, and relation of target to organ-atrisks). We proposed quick and reliable method for adapting segment aperture leaves for IMRT treatment of prostate. Methods: The proposed method consists of following steps: (1) delineate the contours of prostate, SV, bladder and rectum on kV-CBCT; (2) determine prostate displacement from the rigid body registration of the contoured prostate manifested on the reference CT and the CBCT; (3) adapt the MLC segment apertures obtained from the pre-treatment IMRT planning to accommodate the shifts as well as anatomic changes. The MLC aperture adaptive algorithm involves two steps; first move the whole aperture according to prostate translational/rotational shifts, and secondly fine-tune the aperture shape to maintain the spatial relationship between the planning target contour and the MLC aperture to the daily target contour. Feasibility of this method was evaluated retrospectively on a seven-field IMRT treatment of prostate cancer patient by comparing dose volume histograms of the original plan and the aperture-adjusted plan, with/without additional segments weight optimization (SWO), on two daily treatment CBCTs selected with relative large motion and rotation. Results: For first daily treatment, the prostate rotation was significant (12degree around lateral-axis). With apertureadjusted plan, the D95 to the target was improved 25% and rectum dose (D30, D40) was reduced 20% relative to original plan on daily volumes. For second treatment-fraction, (lateral shift = 6.7mm), after adjustment target D95 improved by 3% and bladder dose (D30, maximum dose) was reduced by 1%. For both cases, extra SWO did not provide significant improvement. Conclusion: The proposed method of adapting segment apertures is promising in treatment position correction

  4. 3-D localization of gamma ray sources with coded apertures for medical applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Karafasoulis, K.; Potiriadis, C.; Lambropoulos, C. P.

    2015-09-01

    Several small gamma cameras for radioguided surgery using CdTe or CdZnTe have parallel or pinhole collimators. Coded aperture imaging is a well-known method for gamma-ray source directional identification, applied mainly in astrophysics. The increase in efficiency obtained by substituting coded masks for the collimators renders the method attractive for gamma probes used in radioguided surgery. We have constructed and operationally verified a setup consisting of two CdTe gamma cameras with Modified Uniformly Redundant Array (MURA) coded aperture masks of rank 7 and 19 and a video camera. The 3-D position of point-like radioactive sources is estimated via triangulation using decoded images acquired by the gamma cameras. We have also developed code for both fast and detailed simulations, and we have verified the agreement between experimental results and simulations. In this paper we present a simulation study of the spatial localization of two point sources using coded aperture masks of rank 7 and 19.
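The triangulation step, intersecting the source directions decoded by the two gamma cameras, can be sketched as a least-squares midpoint between two rays. The camera positions and source location below are made-up numbers for illustration:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the common perpendicular between two rays p + t*d,
    one ray per gamma camera (least-squares 'intersection')."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|^2.
    A = np.stack([d1, -d2], axis=1)
    t = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    c1 = p1 + t[0] * d1
    c2 = p2 + t[1] * d2
    return 0.5 * (c1 + c2)

# Hypothetical geometry: cameras 30 cm apart, source at (10, 5, 100) cm.
src = np.array([10.0, 5.0, 100.0])
p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([30.0, 0.0, 0.0])
est = triangulate(p1, src - p1, p2, src - p2)
```

With noisy decoded directions the two rays no longer intersect, and the midpoint of the common perpendicular is the natural estimate.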

  5. Adaptive predictive image coding using local characteristics

    NASA Astrophysics Data System (ADS)

    Hsieh, C. H.; Lu, P. C.; Liou, W. G.

    1989-12-01

    The paper presents an efficient adaptive predictive coding method using the local characteristics of images. In this method, three coding schemes, namely, mean, subsampling combined with fixed DPCM, and ADPCM/PCM, are used and one of these is chosen adaptively based on the local characteristics of images. The prediction parameters of the two-dimensional linear predictor in the ADPCM/PCM are extracted on a block by block basis. Simulation results show that the proposed method is effective in reducing the slope overload distortion and the granular noise at low bit rates, and thus it can improve the visual quality of reconstructed images.
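The adaptive selection among the three schemes, driven by local block statistics, might look like the following sketch. The use of block variance as the local characteristic, the thresholds, and the mode labels are all illustrative assumptions, not the paper's actual decision rule:

```python
import numpy as np

def choose_block_mode(block, flat_thresh=2.0, detail_thresh=25.0):
    """Pick a coding scheme per block from its local statistics:
    near-constant blocks -> 'mean'; smooth blocks -> 'subsample+DPCM';
    busy blocks -> 'ADPCM/PCM'. Thresholds are illustrative."""
    var = float(np.var(block))
    if var < flat_thresh:
        return "mean"
    if var < detail_thresh:
        return "subsample+DPCM"
    return "ADPCM/PCM"

# Three synthetic 8x8 blocks: flat, smooth gradient, and checkerboard.
flat = np.full((8, 8), 128.0)
smooth = np.tile(np.linspace(100, 110, 8), (8, 1))
busy = np.indices((8, 8)).sum(axis=0) % 2 * 80.0

modes = [choose_block_mode(b) for b in (flat, smooth, busy)]
```

Matching the cheap schemes to flat and smooth regions is what keeps granular noise and slope overload low at low bit rates.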

  6. Phase Contrast Imaging with Coded Apertures Using Laboratory-Based X-ray Sources

    SciTech Connect

    Ignatyev, K.; Munro, P. R. T.; Speller, R. D.; Olivo, A.

    2011-09-09

    X-ray phase contrast imaging is a powerful technique that allows detection of changes in the phase of x-ray wavefronts as they pass through a sample. As a result, details not visible in conventional x-ray absorption imaging can be detected. Until recently the majority of applications of phase contrast imaging were at synchrotron facilities due to the availability of their high flux and coherence; however, a number of techniques have appeared recently that allow phase contrast imaging to be performed using laboratory sources. Here we describe a phase contrast imaging technique, developed at University College London, that uses two coded apertures. The x-ray beam is shaped by the pre-sample aperture, and small deviations in the x-ray propagation direction are detected with the help of the detector aperture. In contrast with other methods, it has a much more relaxed requirement for the source size (it works with source sizes up to 100 µm). A working prototype coded-aperture system has been built. An x-ray detector with directly deposited columnar CsI has been used to minimize signal spill-over into neighboring pixels. Phase contrast images obtained with the system have demonstrated its effectiveness for imaging low-absorption materials.

  7. Source-Search Sensitivity of a Large-Area, Coded-Aperture, Gamma-Ray Imager

    SciTech Connect

    Ziock, K P; Collins, J W; Craig, W W; Fabris, L; Lanza, R C; Gallagher, S; Horn, B P; Madden, N W; Smith, E; Woodring, M L

    2004-10-27

    We have recently completed a large-area, coded-aperture, gamma-ray imager for use in searching for radiation sources. The instrument was constructed to verify that weak point sources can be detected at considerable distances if one uses imaging to overcome fluctuations in the natural background. The instrument uses a rank-19, one-dimensional coded aperture to cast shadow patterns onto a 0.57 m² NaI(Tl) detector composed of 57 individual cubes each 10 cm on a side. These are arranged in a 19 x 3 array. The mask is composed of 4-cm-thick, 1-m-high, 10-cm-wide lead blocks. The instrument is mounted in the back of a small truck from which images are obtained as one drives through a region. Results of first measurements obtained with the system are presented.

  8. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths. PMID:25321062

  9. Measurements with Pinhole and Coded Aperture Gamma-Ray Imaging Systems

    SciTech Connect

    Raffo-Caiado, Ana Claudia; Solodov, Alexander A; Abdul-Jabbar, Najeb M; Hayward, Jason P; Ziock, Klaus-Peter

    2010-01-01

    From a safeguards perspective, gamma-ray imaging has the potential to reduce manpower and cost for effectively locating and monitoring special nuclear material. The purpose of this project was to investigate the performance of pinhole and coded aperture gamma-ray imaging systems at Oak Ridge National Laboratory (ORNL). With the aid of the European Commission Joint Research Centre (JRC), radiometric data will be combined with scans from a three-dimensional design information verification (3D-DIV) system. Measurements were performed at the ORNL Safeguards Laboratory using sources that model holdup in radiological facilities. They showed that for situations with moderate amounts of solid or dense U sources, the coded aperture was able to predict source location and geometry within ~7% of actual values, while the pinhole gave a broad representation of source distributions.

  10. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-01

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  11. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balanced correlation function is generated in which holes are represented by 1's, nonholes by -1's, and supporting area by 0's. The self-supporting array can be used for low-energy applications where substrates would greatly reduce throughput.
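The balanced-correlation decoding described here (+1 for holes, -1 for non-holes, 0 for supporting area) produces a flat-sidelobe reconstruction. A 1-D sketch with a length-11 quadratic-residue pattern; for simplicity this toy mask has no support struts, so only the ±1 decoding states appear:

```python
import numpy as np

# 1-D aperture: 1 = hole, 0 = closed (quadratic residues mod 11).
aperture = np.array([0, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0])
# Balanced decoding array: holes -> +1, non-holes -> -1
# (supporting area, absent in this toy mask, would map to 0).
decoder = np.where(aperture == 1, 1.0, -1.0)

# A point source at position 3 casts a shifted copy of the aperture
# pattern onto the detector.
detector = np.roll(aperture, 3).astype(float)

# Balanced cyclic correlation reconstructs a delta at the source position.
image = np.array([np.dot(detector, np.roll(decoder, s)) for s in range(11)])
src_pos = int(np.argmax(image))
```

The reconstruction peaks at the true position with every sidelobe pinned at the same constant level, which is exactly why the balanced function is used.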

  12. Studies of coded aperture gamma-ray optics using an Anger camera

    NASA Astrophysics Data System (ADS)

    Charalambous, P. M.; Dean, A. J.; Stephen, J. B.; Young, N. G. S.; Gourlay, A. R.

    1983-08-01

    An experimental arrangement using an Anger camera as a position-sensitive focal plane, in conjunction with a series of coded aperture masks, has been employed to generate laboratory gamma-ray images. These tests were designed to investigate quantitatively a number of potential aberrations present in any practicable imaging system. It is shown that, by proper design, the major sources of image defects may be reduced to a level compatible with the production of good quality gamma-ray sky images.

  13. Coded aperture coherent scatter imaging for breast cancer detection: a Monte Carlo evaluation

    NASA Astrophysics Data System (ADS)

    Lakshmanan, Manu N.; Morris, Robert E.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-03-01

    It is known that conventional x-ray imaging provides a maximum contrast between cancerous and healthy fibroglandular breast tissues of 3% based on their linear x-ray attenuation coefficients at 17.5 keV, whereas coherent scatter signal provides a maximum contrast of 19% based on their differential coherent scatter cross sections. Therefore in order to exploit this potential contrast, we seek to evaluate the performance of a coded-aperture coherent scatter imaging system for breast cancer detection and investigate its accuracy using Monte Carlo simulations. In the simulations we modeled our experimental system, which consists of a raster-scanned pencil beam of x-rays, a bismuth-tin coded aperture mask comprised of a repeating slit pattern with 2-mm periodicity, and a linear-array of 128 detector pixels with 6.5-keV energy resolution. The breast tissue that was scanned comprised a 3-cm sample taken from a patient-based XCAT breast phantom containing a tomosynthesis-based realistic simulated lesion. The differential coherent scatter cross section was reconstructed at each pixel in the image using an iterative reconstruction algorithm. Each pixel in the reconstructed image was then classified as being either air or the type of breast tissue with which its normalized reconstructed differential coherent scatter cross section had the highest correlation coefficient. Comparison of the final tissue classification results with the ground truth image showed that the coded aperture imaging technique has a cancerous pixel detection sensitivity (correct identification of cancerous pixels), specificity (correctly ruling out healthy pixels as not being cancer) and accuracy of 92.4%, 91.9% and 92.0%, respectively. Our Monte Carlo evaluation of our experimental coded aperture coherent scatter imaging system shows that it is able to exploit the greater contrast available from coherently scattered x-rays to increase the accuracy of detecting cancerous regions within the breast.
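The per-pixel classification step, assigning each pixel the tissue type whose reference cross section best correlates with the reconstructed spectrum, can be sketched as below. The reference curves are made-up Gaussian stand-ins, not measured form factors:

```python
import numpy as np

# Hypothetical reference differential coherent-scatter cross sections,
# sampled on 32 momentum-transfer channels (arbitrary shapes).
channels = np.linspace(0.5, 3.0, 32)
refs = {
    "adipose":        np.exp(-(channels - 1.1) ** 2 / 0.1),
    "fibroglandular": np.exp(-(channels - 1.6) ** 2 / 0.2),
    "cancerous":      np.exp(-(channels - 1.7) ** 2 / 0.05),
}

def classify(spectrum, refs):
    """Assign the class whose reference curve has the highest correlation
    coefficient with the pixel's reconstructed spectrum."""
    best, best_r = None, -2.0
    for name, ref in refs.items():
        r = np.corrcoef(spectrum, ref)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best

# A noisy measured spectrum drawn from the adipose reference.
rng = np.random.default_rng(1)
measured = refs["adipose"] + rng.normal(0.0, 0.02, size=32)
label = classify(measured, refs)
```

Using the correlation coefficient rather than raw distance makes the decision insensitive to the overall normalization of the reconstructed cross section.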

  14. Lensless coded-aperture imaging with separable Doubly-Toeplitz masks

    NASA Astrophysics Data System (ADS)

    DeWeert, Michael J.; Farm, Brian P.

    2015-02-01

    In certain imaging applications, conventional lens technology is constrained by the lack of materials which can effectively focus the radiation within a reasonable weight and volume. One solution is to use coded apertures: opaque plates perforated with multiple pinhole-like openings. If the openings are arranged in an appropriate pattern, then the images can be decoded and a clear image computed. Recently, computational imaging and the search for a means of producing programmable software-defined optics have revived interest in coded apertures. The former state-of-the-art masks, modified uniformly redundant arrays (MURAs), are effective for compact objects against uniform backgrounds, but have substantial drawbacks for extended scenes: (1) MURAs present an inherently ill-posed inversion problem that is unmanageable for large images, and (2) they are susceptible to diffraction: a diffracted MURA is no longer a MURA. We present a new class of coded apertures, separable Doubly-Toeplitz masks, which are efficiently decodable even for very large images, orders of magnitude faster than MURAs, and which remain decodable when diffracted. We implemented the masks using programmable spatial-light-modulators. Imaging experiments confirmed the effectiveness of separable Doubly-Toeplitz masks: images collected in natural light of extended outdoor scenes are rendered clearly.
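The computational advantage of a separable mask is that the 2-D measurement factors as Y = A X Bᵀ with two Toeplitz matrices, so decoding reduces to two small 1-D inversions instead of one huge 2-D inversion. A noiseless sketch with made-up codes (not the paper's masks or decoding algorithm):

```python
import numpy as np

def toeplitz_conv_matrix(kernel, n):
    """Tall Toeplitz matrix implementing 'full' 1-D convolution."""
    m = n + len(kernel) - 1
    T = np.zeros((m, n))
    for j in range(n):
        T[j:j + len(kernel), j] = kernel
    return T

# Hypothetical separable mask: independent 1-D codes for rows and columns.
row_code = np.array([1.0, 0.0, 1.0, 1.0, 0.0])
col_code = np.array([1.0, 1.0, 0.0, 1.0])
n = 16
A = toeplitz_conv_matrix(row_code, n)   # acts along image rows
B = toeplitz_conv_matrix(col_code, n)   # acts along image columns

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(n, n))      # scene
Y = A @ X @ B.T                          # separable coded measurement

# Decode with two small pseudo-inverses rather than one (n*n)-sized solve.
X_hat = np.linalg.pinv(A) @ Y @ np.linalg.pinv(B).T
err = float(np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

For an N×N image this costs two N-sized linear solves instead of one N²-sized one, which is the source of the claimed orders-of-magnitude speedup.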

  15. Optimizing the search for high-z GRBs: the JANUS X-ray coded aperture telescope

    NASA Astrophysics Data System (ADS)

    Burrows, D. N.; Fox, D.; Palmer, D.; Romano, P.; Mangano, V.; La Parola, V.; Falcone, A. D.; Roming, P. W. A.

    We discuss the optimization of gamma-ray burst (GRB) detectors with a goal of maximizing the detected number of bright high-redshift GRBs, in the context of design studies conducted for the X-ray transient detector on the JANUS mission. We conclude that the optimal energy band for detection of high-z GRBs is below about 30 keV. We considered both lobster-eye and coded aperture designs operating in this energy band. Within the available mass and power constraints, we found that the coded aperture mask was preferred for the detection of high-z bursts with bright enough afterglows to probe galaxies in the era of the Cosmic Dawn. This initial conclusion was confirmed through detailed mission simulations that found that the selected design (an X-ray Coded Aperture Telescope) would detect four times as many bright, high-z GRBs as the lobster-eye design we considered. The JANUS XCAT instrument will detect 48 GRBs with z > 5 and fluence S_X > 3 × 10^-7 erg cm^-2 in a two-year mission.

  16. Detection optimization using linear systems analysis of a coded aperture laser sensor system

    SciTech Connect

    Gentry, S.M.

    1994-09-01

    Minimum detectable irradiance levels for a diffraction grating based laser sensor were calculated to be governed by clutter noise resulting from reflected earth albedo. Features on the earth surface caused pseudo-imaging effects on the sensor's detector arrays that resulted in the limiting noise in the detection domain. It was theorized that a custom aperture transmission function existed that would optimize the detection of laser sources against this clutter background. Amplitude and phase aperture functions were investigated. Compared to the diffraction grating technique, a classical Young's double-slit aperture technique was investigated as a possible optimized solution but was not shown to produce a system that had better clutter-noise-limited minimum detectable irradiance. Even though the double-slit concept was not found to have a detection advantage over the slit-grating concept, one interesting concept grew out of the double-slit design that deserves mention in this report, namely the Barker-coded double slit. This diffractive aperture design possessed properties that significantly improved the wavelength accuracy of the double-slit design. While a concept was not found to beat the slit-grating concept, the methodology used for the analysis and optimization is an example of the application of optoelectronic system-level linear analysis. The techniques outlined here can be used as a template for analysis of a wide range of optoelectronic systems where the entire system, both optical and electronic, contributes to the detection of complex spatial and temporal signals.
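
    The Barker-coded double slit mentioned above relies on the defining property of Barker sequences: an aperiodic autocorrelation with a sharp peak and sidelobes of magnitude at most one, which is what sharpens the wavelength response. A short numpy check of this property for the length-13 code (an illustration of the sequence property, not the report's analysis code):

```python
import numpy as np

# Barker-13 code: peak autocorrelation of 13 at zero lag, and every
# off-peak (aperiodic) sidelobe has magnitude <= 1.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

def autocorr(code):
    # Aperiodic autocorrelation for non-negative lags
    n = len(code)
    return np.array([np.sum(code[: n - k] * code[k:]) for k in range(n)])

ac = autocorr(barker13)
```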

  17. The use of an active coded aperture for improved directional measurements in high energy gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Johansson, A.; Beron, B. L.; Campbell, L.; Eichler, R.; Hofstadter, R.; Hughes, E. B.; Wilson, S.; Gorodetsky, P.

    1980-01-01

    The coded aperture, a refinement of the scatter-hole camera, offers a method for the improved measurement of gamma-ray direction in gamma-ray astronomy. Two prototype coded apertures have been built and tested. The more recent of these has 128 active elements of the heavy scintillator BGO. Results of tests with gamma-rays in the range 50-500 MeV are reported, and future applications in space are discussed.

  18. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
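
    The core of BAQ is simple: estimate a gain for each block of raw signal samples from its local statistics, then quantize the normalized samples to a few bits. A toy numpy sketch of this idea (illustrative only; the names `baq_compress`/`baq_decompress`, the block size, and the mean-magnitude gain estimator are assumptions, not ESA's ASIC implementation):

```python
import numpy as np

def baq_compress(signal, block=128, bits=4):
    """Toy block adaptive quantizer: a per-block scale from the mean
    magnitude, then uniform quantization of the normalized samples."""
    levels = 2 ** (bits - 1)
    quantized, scales = [], []
    for i in range(0, len(signal), block):
        blk = signal[i:i + block]
        scale = np.mean(np.abs(blk)) + 1e-12      # adapts to local power
        q = np.clip(np.round(blk / scale), -levels, levels - 1).astype(np.int8)
        quantized.append(q)
        scales.append(scale)
    return quantized, scales

def baq_decompress(quantized, scales):
    # Reconstruct by re-applying each block's scale
    return np.concatenate([q * s for q, s in zip(quantized, scales)])
```

    Varying `bits` per operating mode is one way to expose the compression-ratio/image-quality trade-off the abstract describes.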

  19. Precision Imaging with Adaptive Optics Aperture Masking Interferometry

    NASA Astrophysics Data System (ADS)

    Martinache, F.; Lloyd, J. P.; Tuthill, P.; Woodruff, H. C.; ten Brummelaar, T.; Turner, N.

    2005-12-01

    Adaptive Optics (AO) enables sensitive diffraction-limited imaging from the ground on large telescopes. Much of the promise of AO has yet to be fully realised, due to the difficulties imposed by the complicated, unstable and unknown PSF. At the highest resolutions (inside the PSF), AO has yet to demonstrate its full potential for improvements over speckle techniques. The most precise astronomical speckle imaging observations have resulted from non-redundant pupil masking. We are developing a technique to solve the problem of PSF characterization in AO imaging by combining the image-reconstruction heritage of sparse pupil sampling in astronomical interferometry with the long coherence times available after AO correction. Masking the output pupil of the AO system with a non-redundant array can provide self-calibrated imaging. Further calibration of the MTF can be provided by AO wavefront sensor telemetry data. With a precisely calibrated PSF, reliable, well-posed deconvolution is possible. High-SNR data and accurate MTF calibration, provided by the combination of non-redundant masking and AO system telemetry, allow super-resolution. AEOS provides a unique capability to explore the dynamic range and imaging precision of this technique at visible wavelengths. The NSF/AFOSR program has funded an instrument to explore these new imaging techniques at AEOS. ZOR/AO (Zero Optical Redundance with Adaptive Optics) is presently under construction, to be deployed at AEOS in 2005.
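
    The self-calibration property above rests on the mask being non-redundant: every pair of holes must produce a distinct baseline, so each spatial frequency is sampled by exactly one hole pair. A small illustrative check for a 1-D hole layout (the function name and example layouts are assumptions):

```python
from itertools import combinations

def is_nonredundant(positions):
    """True when every pairwise baseline (hole separation) is unique,
    so each spatial frequency is measured by exactly one hole pair."""
    baselines = [abs(a - b) for a, b in combinations(positions, 2)]
    return len(baselines) == len(set(baselines))
```

    A Golomb-ruler-like layout such as (0, 1, 4, 6) is non-redundant, whereas a uniform grid repeats baselines and is not.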

  20. Large Coded Aperture Mask for Spaceflight Hard X-ray Images

    NASA Technical Reports Server (NTRS)

    Vigneau, Danielle N.; Robinson, David W.

    2002-01-01

    The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random, but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel which projects a shadow on the detector array during a gamma ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied and allowed to cure on each tile. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and the panel was lowered to the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.

  1. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.

  2. Snapshot full-volume coded aperture x-ray diffraction tomography

    NASA Astrophysics Data System (ADS)

    Greenberg, Joel A.; Brady, David J.

    2016-05-01

    X-ray diffraction tomography (XRDT) is a well-established technique that makes it possible to identify the material composition of an object throughout its volume. We show that using coded apertures to structure the measured scatter signal gives rise to a family of imaging architectures that enables snapshot XRDT in up to 4 dimensions. We consider pencil, fan, and cone beam snapshot XRDT and show results from both experimental and simulation-based studies. We find that, while lower-dimensional systems typically result in higher imaging fidelity, higher-dimensional systems can perform adequately for a specific task at orders-of-magnitude faster scan times.

  3. Coded aperture correlation holography-a new type of incoherent digital holograms.

    PubMed

    Vijayakumar, A; Kashter, Yuval; Kelner, Roy; Rosen, Joseph

    2016-05-30

    We propose and demonstrate a new concept of incoherent digital holography termed coded aperture correlation holography (COACH). In COACH, the hologram of an object is formed by the interference of light diffracted from the object with light diffracted from the same object, but that passes through a coded phase mask (CPM). Another hologram is recorded for a point object, under identical conditions and with the same CPM. This hologram is called the point spread function (PSF) hologram. The reconstructed image is obtained by correlating the object hologram with the PSF hologram. The image reconstruction of a multiplane object using COACH was compared with that of other equivalent imaging systems and was found to possess a higher axial resolution than Fresnel incoherent correlation holography.
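
    The reconstruction step described above, correlating the object hologram with the PSF hologram, is a cross-correlation that is cheap to evaluate with FFTs. A minimal numpy sketch of that step only (illustrative; real COACH holograms are complex-valued interference records, here replaced by a toy array):

```python
import numpy as np

def coach_reconstruct(obj_holo, psf_holo):
    # Cross-correlate the object hologram with the PSF hologram via the
    # frequency domain; each object point produces a correlation peak
    # at its location.
    O = np.fft.fft2(obj_holo)
    P = np.fft.fft2(psf_holo)
    return np.fft.fftshift(np.fft.ifft2(O * np.conj(P)))
```

    For a toy "object" that is simply the PSF pattern shifted by a few pixels, the magnitude of the output peaks at that shift (relative to the centered origin).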

  5. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-01

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as incorporation of optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger than the field of view. Simulations are verified by comparison with experimental data.
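
    The computational trick in the model above is that a convolution of a source distribution with a periodic object is circular, so it can be evaluated exactly with FFTs. A minimal sketch of one term of such a convolution series (illustrative; the real model sums many such terms over depth):

```python
import numpy as np

def periodic_convolve(source, obj):
    """Circular convolution of a structured-source pattern with a
    periodic object, evaluated in the frequency domain."""
    return np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(obj)))
```

    The FFT route costs O(N^2 log N) per term instead of O(N^4) for the direct double sum, which is what makes large simulations tractable.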

  6. Establishing a MOEMS process to realise microshutters for coded aperture imaging applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Davies, Rhodri R.; Johnson, Ashley; Price, Nicola; Bennett, Charlotte R.; Slinger, Christopher W.; Hardy, Busbee; Hames, Greg; Monk, Demaul; Rogers, Stanley

    2011-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lensless imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene, requiring a reconfigurable mask. Previously reported work focused on realising such a mask to operate in the mid-IR band based on polysilicon micro-optoelectro-mechanical systems (MOEMS) technology and its integration with ASIC drive electronics using a tiled approach to scale to large format masks. The MOEMS chips employ interference effects to modulate incident light, achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage using row/column addressing. In this paper we report on establishing the manufacturing process for such MOEMS microshutter chips in a commercial MEMS foundry, MEMSCAP, including the associated challenges in moving the technology out of the development laboratory into manufacturing. Small-scale (7.3 x 7.3 mm) and full-size (22 x 22 mm) MOEMS chips have been produced that are equivalent to those produced at QinetiQ. Optical and electrical testing has shown that these are suitable for integration into large format reconfigurable masks for coded aperture imaging applications.

  7. Coded aperture x-ray diffraction imaging with transmission computed tomography side-information

    NASA Astrophysics Data System (ADS)

    Odinaka, Ikenna; Greenberg, Joel A.; Kaganovsky, Yan; Holmgren, Andrew; Hassan, Mehadi; Politte, David G.; O'Sullivan, Joseph A.; Carin, Lawrence; Brady, David J.

    2016-03-01

    Coded aperture X-ray diffraction (coherent scatter spectral) imaging provides fast and dose-efficient measurements of the molecular structure of an object. The information provided is spatially dependent and material-specific, and can be utilized in medical applications requiring material discrimination, such as tumor imaging. However, current coded aperture coherent scatter spectral imaging systems assume a uniformly or weakly attenuating object, and are plagued by image degradation due to non-uniform self-attenuation. We propose accounting for such non-uniformities in the self-attenuation by utilizing an X-ray computed tomography (CT) image (reconstructed attenuation map). In particular, we present an iterative algorithm for coherent scatter spectral image reconstruction, which incorporates the attenuation map at different stages, resulting in more accurate coherent scatter spectral images in comparison to their uncorrected counterpart. The algorithm is based on a spectrally grouped edge-preserving regularizer, where the neighborhood edge weights are determined by spatial distances and attenuation values.
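
    The edge-weight idea in the last sentence can be sketched directly: a weight between neighboring pixels that decays with spatial distance and with the attenuation difference, so smoothing is strong within a material and weak across material boundaries. The function name and the `sigma` scale parameters below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def edge_weights(attn, sigma_s=1.0, sigma_a=0.05):
    """Weights between horizontally adjacent pixels of an attenuation map:
    large when neighbors have similar attenuation (same material),
    near zero across an attenuation edge."""
    diff = np.abs(np.diff(attn, axis=1))   # attenuation jump to right neighbor
    dist = 1.0                             # unit spacing on the pixel grid
    return np.exp(-dist / sigma_s) * np.exp(-(diff / sigma_a) ** 2)
```

    In an edge-preserving regularizer these weights multiply the pairwise smoothness penalties, which is how the attenuation map steers the reconstruction.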

  8. Modelling of a novel x-ray phase contrast imaging technique based on coded apertures

    NASA Astrophysics Data System (ADS)

    Olivo, A.; Speller, R.

    2007-11-01

    X-ray phase contrast imaging is probably the most relevant among emerging x-ray imaging techniques, and it has the proven potential of revolutionizing the field of diagnostic radiology. Impressive images of a wide range of samples have been obtained, mostly at synchrotron radiation facilities. The necessity of relying on synchrotron radiation has largely prevented the widespread adoption of phase contrast imaging, thus precluding its transfer to clinical practice. A new technique, based on the use of coded apertures, was recently developed at UCL. This technique was demonstrated to provide intense phase contrast signals with conventional x-ray sources and detectors. Unlike other attempts at making phase contrast imaging feasible with conventional sources, the coded-aperture approach does not impose substantial limitations and/or filtering of the radiation beam, and it therefore allows, for the first time, exposures compatible with clinical practice. The technique has been thoroughly modelled, and this paper describes the technique in detail by going through the different steps of the modelling. All the main factors influencing image quality are discussed, alongside the viability of realizing a prototype suitable for clinical use. The model has been experimentally validated, and a section of the paper shows the comparison between simulated and experimental results.

  9. Adaptive Dynamic Event Tree in RAVEN code

    SciTech Connect

    Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego; Cogliati, Joshua Joseph; Kinoshita, Robert Arthur

    2014-11-01

    RAVEN is a software tool that is focused on performing statistical analysis of stochastic dynamic systems. RAVEN has been designed in a highly modular and pluggable way in order to enable easy integration of different programming languages (i.e., C++, Python) and coupling with other applications (system codes). Among the several capabilities currently present in RAVEN, there are five different sampling strategies: Monte Carlo, Latin Hypercube, Grid, Adaptive, and Dynamic Event Tree (DET) sampling methodologies. The scope of this paper is to present a new sampling approach, currently under definition and implementation: an evolution of the DET methodology.

  10. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered subset expectation and maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system.
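
    The ordered-subset expectation maximization (OSEM) step used above reduces, with a single subset, to the classic MLEM update: multiply the current estimate by the backprojected ratio of measured to predicted counts. A minimal numpy sketch on a toy system matrix (illustrative of the update rule, not the authors' implementation):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Minimal MLEM iteration (OSEM with a single subset):
    x <- x * A.T(y / Ax) / A.T(1), which keeps x nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image, A.T @ 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted counts
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

    OSEM accelerates this by cycling the same update over subsets of the sinogram rows.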

  11. ICAN Computer Code Adapted for Building Materials

    NASA Technical Reports Server (NTRS)

    Murthy, Pappu L. N.

    1997-01-01

    The NASA Lewis Research Center has been involved in developing composite micromechanics and macromechanics theories over the last three decades. These activities have resulted in several composite mechanics theories and structural analysis codes whose applications range from material behavior design and analysis to structural component response. One of these computer codes, the Integrated Composite Analyzer (ICAN), is designed primarily to address issues related to designing polymer matrix composites and predicting their properties - including hygral, thermal, and mechanical load effects. Recently, under a cost-sharing cooperative agreement with a Fortune 500 corporation, Master Builders Inc., ICAN was adapted to analyze building materials. The high costs and technical difficulties involved with the fabrication of continuous-fiber-reinforced composites sometimes limit their use. Particulate-reinforced composites can be thought of as a viable alternative. They are as easily processed to near-net shape as monolithic materials, yet have the improved stiffness, strength, and fracture toughness that is characteristic of continuous-fiber-reinforced composites. For example, particle-reinforced metal-matrix composites show great potential for a variety of automotive applications, such as disk brake rotors, connecting rods, cylinder liners, and other high-temperature applications. Building materials, such as concrete, can be thought of as one of the oldest materials in this category of multiphase, particle-reinforced materials. The adaptation of ICAN to analyze particle-reinforced composite materials involved the development of new micromechanics-based theories. A derivative of the ICAN code, ICAN/PART, was developed and delivered to Master Builders Inc. as a part of the cooperative activity.

  12. Sensitivity of coded aperture Raman spectroscopy to analytes beneath turbid biological tissue and tissue-simulating phantoms.

    PubMed

    Maher, Jason R; Matthews, Thomas E; Reid, Ashley K; Katz, David F; Wax, Adam

    2014-01-01

    Traditional slit-based spectrometers have an inherent trade-off between spectral resolution and throughput that can limit their performance when measuring diffuse sources such as light returned from highly scattering biological tissue. Recently, multielement fiber bundles have been used to effectively measure diffuse sources, e.g., in the field of spatially offset Raman spectroscopy, by remapping the source (or some region of the source) into a slit shape for delivery to the spectrometer. Another approach is to change the nature of the instrument by using a coded entrance aperture, which can increase throughput without sacrificing spectral resolution. In this study, two spectrometers, one with a slit-based entrance aperture and the other with a coded aperture, were used to measure Raman spectra of an analyte as a function of the optical properties of an overlying scattering medium. Power-law fits reveal that the analyte signal is approximately proportional to the number of transport mean free paths of the scattering medium raised to a power of -0.47 (coded aperture instrument) or -1.09 (slit-based instrument). These results demonstrate that the attenuation in signal intensity is more pronounced for the slit-based instrument and highlight the scattering regimes where coded aperture instruments can provide an advantage over traditional slit-based spectrometers.
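
    Power-law exponents like the -0.47 and -1.09 quoted above are conventionally obtained by a linear least-squares fit in log-log space, where y = c·x^p becomes log y = p·log x + log c. A minimal numpy sketch of that fitting step (illustrative; not the authors' fitting code):

```python
import numpy as np

def power_law_exponent(x, y):
    """Fit y = c * x**p by least squares in log-log space; return p.

    np.polyfit returns [slope, intercept] for degree 1; the slope of
    log(y) vs log(x) is the power-law exponent."""
    p, _ = np.polyfit(np.log(x), np.log(y), 1)
    return p
```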

  15. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so that masks with small numbers of elements are called for. For this case it is feasible to analyze by computer all the possible mask patterns of given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies and transparencies of different mask patterns is developed. The results of such a computer analysis for masks up to dimension 5 x 5 unit cell are presented and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.
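
    The exhaustive-search idea above is cheap for small masks: with only a few elements, every pattern of a given open fraction can be enumerated and scored. The sketch below uses a single illustrative criterion, the maximum off-peak periodic autocorrelation sidelobe (a flat autocorrelation gives an artifact-free decoded point response); the paper's actual criteria (flux sensitivity, positioning accuracy, transparency) are richer, and the function names are assumptions:

```python
import numpy as np
from itertools import combinations

def sidelobe_score(mask):
    """Maximum off-zero-lag periodic autocorrelation of a binary mask;
    lower is better for decoding."""
    F = np.fft.fft2(mask)
    ac = np.real(np.fft.ifft2(F * np.conj(F)))  # periodic autocorrelation
    ac[0, 0] = -np.inf                           # ignore the zero-lag peak
    return ac.max()

def best_small_mask(n=3, n_open=4):
    """Enumerate every n x n pattern with n_open open cells and keep the
    one with the flattest autocorrelation."""
    cells = [(i, j) for i in range(n) for j in range(n)]
    best, best_score = None, np.inf
    for holes in combinations(cells, n_open):
        m = np.zeros((n, n))
        for h in holes:
            m[h] = 1.0
        s = sidelobe_score(m)
        if s < best_score:
            best, best_score = m, s
    return best, best_score
```

    For 5 x 5 unit cells the search space is still only 2^25 patterns, which is why full enumeration is feasible, exactly as the abstract argues.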

  16. Gamma ray imaging using coded aperture masks: a computer simulation approach.

    PubMed

    Jimenez, J; Olmos, P; Pablos, J L; Perez, J M

    1991-02-10

    Gamma-ray imaging using coded aperture masks as focusing elements is a widespread technique for static position-sensitive detectors. Several transfer functions have been proposed to implement mathematically the set of holes in the mask, the uniformly redundant array collimator being the most popular design. A considerable amount of work has been done to improve the digital methods to deconvolve the gamma-ray image, formed at the detector plane, with this transfer function. Here we present a study of the behavior of these techniques when applied to the geometric shadows produced by a set of point emitters. Comparison of the shape of the object reconstructed from these shadows with that resulting from the analytical reconstruction is performed, defining the validity ranges of the usual algorithmic approximations reported in the literature. Finally, several improvements are discussed.

  17. Compatibility of Spatially Coded Apertures with a Miniature Mattauch-Herzog Mass Spectrograph

    NASA Astrophysics Data System (ADS)

    Russell, Zachary E.; DiDona, Shane T.; Amsden, Jason J.; Parker, Charles B.; Kibelka, Gottfried; Gehm, Michael E.; Glass, Jeffrey T.

    2016-04-01

    In order to minimize losses in signal intensity often present in mass spectrometry miniaturization efforts, we recently applied the principles of spatially coded apertures to magnetic sector mass spectrometry, thereby achieving increases in signal intensity of greater than 10× with no loss in mass resolution [Chen et al., J. Am. Soc. Mass Spectrom. 26, 1633-1640 (2015); Russell et al., J. Am. Soc. Mass Spectrom. 26, 248-256 (2015)]. In this work, we simulate theoretical compatibility and demonstrate preliminary experimental compatibility of the Mattauch-Herzog mass spectrograph geometry with spatial coding. For the simulation-based theoretical assessment, COMSOL Multiphysics finite element solvers were used to simulate electric and magnetic fields, and a custom particle tracing routine was written in C# that allowed for calculations of more than 15 million particle trajectory time steps per second. Preliminary experimental results demonstrating compatibility of spatial coding with the Mattauch-Herzog geometry were obtained using a commercial miniature mass spectrograph from OI Analytical/Xylem.

  19. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms for ocean image synthesis and surface wave analysis were implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High-resolution images of the ocean often contain no striking features, and subtle image modulations by wind-generated surface waves become apparent only when large ocean regions are studied with Fourier transforms to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large-scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives, and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.
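
    The archive-the-spectrum idea can be illustrated numerically: transform a wave-like scene, keep only the dominant complex Fourier coefficients, and invert on the ground. The synthetic scene, the 2% retention fraction, and the noise level below are illustrative assumptions, not parameters of the SRL processor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ocean" scene: a periodic swell pattern plus speckle-like noise.
x, y = np.meshgrid(np.arange(128), np.arange(128))
swell = np.sin(2 * np.pi * (3 * x + 5 * y) / 128)
scene = swell + 0.3 * rng.standard_normal((128, 128))

# Onboard step: forward transform, then archive only the strongest ~2% of
# complex coefficients (the coded spectrum).
spectrum = np.fft.fft2(scene)
threshold = np.quantile(np.abs(spectrum), 0.98)
coded = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
compression_ratio = spectrum.size / np.count_nonzero(coded)

# Ground-station step: the inverse transform recreates the wave signature.
recon = np.fft.ifft2(coded).real
corr = np.corrcoef(recon.ravel(), swell.ravel())[0, 1]
```

    Because the swell concentrates its energy in a few Fourier bins, the truncated spectrum reconstructs the wave field faithfully at roughly 50:1 data reduction on this toy scene.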

  20. Evaluation of the cosmic-ray induced background in coded aperture high energy gamma-ray telescopes

    NASA Technical Reports Server (NTRS)

    Owens, Alan; Barbier, Louis M.; Frye, Glenn M.; Jenkins, Thomas L.

    1991-01-01

    While the application of coded-aperture techniques to high-energy gamma-ray astronomy offers potential arc-second angular resolution, concerns were raised about the level of secondary radiation produced in a thick high-Z mask. A series of Monte Carlo calculations was conducted to evaluate and quantify the cosmic-ray-induced neutral-particle background produced in a coded-aperture mask. It is shown that this component may be neglected, being at least a factor of 50 lower in intensity than the cosmic diffuse gamma rays.

  1. General adaptive-neighborhood technique for improving synthetic aperture radar interferometric coherence estimation.

    PubMed

    Vasile, Gabriel; Trouvé, Emmanuel; Ciuc, Mihai; Buzuloiu, Vasile

    2004-08-01

    A new method for filtering the coherence map derived from synthetic aperture radar (SAR) interferometric data is presented. For each pixel of the interferogram, an adaptive neighborhood is determined by a region-growing technique driven by the information provided by the amplitude images. Then, after a phase-compensation step is performed, the pixels in the derived adaptive neighborhood are complex-averaged to yield the filtered value of the coherence. An extension of the algorithm is proposed for polarimetric interferometric SAR images. The proposed method has been applied to both European Remote Sensing (ERS) satellite SAR images and airborne high-resolution polarimetric interferometric SAR images. Both subjective and objective performance analysis, including coherence edge detection, shows that the proposed method provides better results than the standard phase-compensated fixed multilook filter and the Lee adaptive coherence filter.

  2. Requirements for imaging vulnerable plaque in the coronary artery using a coded aperture imaging system

    NASA Astrophysics Data System (ADS)

    Tozian, Cynthia

    A coded aperture plate was employed on a conventional gamma camera for 3D single photon emission computed tomography (SPECT) imaging of small animal models. The coded aperture design was selected to improve the spatial resolution and decrease the minimum detectable activity (MDA) required to image plaque formation in the ApoE (apolipoprotein E) gene-deficient mouse model when compared to conventional SPECT techniques. The pattern tested was a no-two-holes-touching (NTHT) modified uniformly redundant array (MURA) having 1,920 pinholes. The number of pinholes, combined with the thin sintered-tungsten plate, was designed to increase the efficiency of the imaging modality over conventional gamma camera imaging methods while improving spatial resolution and reducing noise in the image reconstruction. The MDA required to image the vulnerable plaque in a human cardiac-torso mathematical phantom was simulated with a Monte Carlo code and evaluated to determine the optimum plate thickness by a receiver operating characteristic (ROC) analysis yielding the lowest possible MDA and highest area under the curve (AUC). A partial 3D expectation maximization (EM) reconstruction was developed to improve signal-to-noise ratio (SNR), dynamic range, and spatial resolution over the linear correlation method of reconstruction. This improvement was evaluated by imaging a mini hot-rod phantom, simulating the dynamic range, and by performing a bone scan of the C-57 control mouse. Results of the experimental and simulated data, as well as other plate designs, were analyzed for use as a small animal and potentially human cardiac imaging modality for a radiopharmaceutical developed at Bristol-Myers Squibb Medical Imaging Company, North Billerica, MA, for diagnosing vulnerable plaques. If left untreated, these plaques may rupture, causing sudden, unexpected coronary occlusion and death.
The results of this research indicated that imaging and reconstructing with this new partial 3D algorithm improved

  3. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detection, identification and localization of weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, 5x5x2 in3 each, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24x2.5x3 in3, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) data, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Also, algorithms were developed in parallel with detector hardware, through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4). Simulations have been well validated against measured data.
Results of image reconstruction algorithms at various speeds and distances will be presented as well as

  4. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    SciTech Connect

    Joshi, S; Kaye, W; Jaworski, J; He, Z

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature-operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30keV and 3MeV with energy resolution of <1% FWHM at 662keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122keV) point sources, image reconstruction is performed in real-time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for
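
    The backprojection step can be sketched in one dimension. This is an illustrative toy (random mask, single source, made-up count levels), not the Polaris/H3D implementation: a binary mask casts a shifted shadow on a position-sensitive detector, and correlating the count distribution with a zero-mean decoding pattern recovers the source direction.

```python
import numpy as np

rng = np.random.default_rng(42)

n = 64
mask = rng.integers(0, 2, n)    # open (1) / closed (0) mask elements
true_shift = 17                 # source direction, encoded as a shadow shift

# Detector counts: shifted copy of the mask pattern plus background noise.
counts = np.roll(mask, true_shift) * 100.0 + rng.poisson(5, n)

# Backprojection: correlate the counts against every shift of a zero-mean
# decoding pattern; the correlation peak gives the source direction.
decode = 2.0 * mask - 1.0
image = np.array([np.dot(counts, np.roll(decode, s)) for s in range(n)])
estimated_shift = int(np.argmax(image))
```

    The zero-mean decoding pattern makes the off-peak correlation average to zero, so the peak stands out even with background counts.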

  5. Target-adaptive polarimetric synthetic aperture radar target discrimination using maximum average correlation height filters.

    PubMed

    Sadjadi, Firooz A; Mahalanobis, Abhijit

    2006-05-01

    We report the development of a technique for adaptive selection of polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in-phase and quadrature-phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross sections of the radar scatterers are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target-versus-clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method to real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target-versus-clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of a polarimetric radar, one can noticeably improve the discrimination of targets from clutter.
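
    The polarization-synthesis step can be sketched as follows. The Stokes-vector parameterization by tilt ψ and ellipticity χ and the bilinear Mueller-matrix power formula are standard; the diagonal Mueller matrix and the use of raw synthesized power (rather than the paper's correlation-filter distance measure) are simplifying assumptions for illustration.

```python
import numpy as np

def stokes(psi, chi):
    """Stokes vector of a fully polarized antenna state with tilt angle psi
    and ellipticity angle chi (radians)."""
    return np.array([1.0,
                     np.cos(2 * chi) * np.cos(2 * psi),
                     np.cos(2 * chi) * np.sin(2 * psi),
                     np.sin(2 * chi)])

def synthesized_power(M, psi_t, chi_t, psi_r, chi_r):
    """Polarization synthesis: received power (arbitrary scale) for given
    transmit/receive states applied to Mueller matrix M."""
    return float(stokes(psi_r, chi_r) @ M @ stokes(psi_t, chi_t))

# Hypothetical Mueller matrix of an idealized odd-bounce (sphere-like) target.
M_target = np.diag([1.0, 1.0, 1.0, -1.0])

# Sweep the four state angles on a coarse grid and keep the maximizing set,
# mirroring the search for optimum transmit/receive angles described above.
grid = np.linspace(-np.pi / 4, np.pi / 4, 9)
best = max(((synthesized_power(M_target, pt, ct, pr, cr), pt, ct, pr, cr)
            for pt in grid for ct in grid for pr in grid for cr in grid),
           key=lambda t: t[0])
```

    For this Mueller matrix the maximum synthesized power is 2 (e.g. matched linear polarizations), consistent with the co-polarized response of an odd-bounce scatterer.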

  6. Adaptive-neighborhood speckle removal in multitemporal synthetic aperture radar images.

    PubMed

    Ciuc, M; Bolon, P; Trouve, E; Buzuloiu, V; Rudant, J P

    2001-11-10

    We present a new method for multitemporal synthetic aperture radar image filtering using three-dimensional (3D) adaptive neighborhoods. The method takes both spatial and temporal information into account to derive the speckle-free value of a pixel. For each pixel individually, a 3D adaptive neighborhood is determined that contains only pixels belonging to the same distribution as the current pixel. Then statistics computed inside the established neighborhood are used to derive the filter output. It is shown that the method provides good results by drastically reducing speckle over homogeneous areas while retaining edges and thin structures. The performance of the proposed method is compared, in terms of subjective and objective measures, with that of several classical speckle-filtering methods.
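
    A deliberately simplified two-dimensional sketch of the adaptive-neighborhood idea (the paper's filter is three-dimensional and statistically driven; only the region-growing principle is kept here, with a hypothetical relative tolerance):

```python
import numpy as np
from collections import deque

def adaptive_neighborhood_filter(img, tol=0.2, max_size=50):
    """Grow, for each pixel, a connected neighborhood of similar-valued
    pixels, then replace the pixel by the neighborhood mean."""
    out = np.empty(img.shape, dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            seed = img[r, c]
            seen = {(r, c)}
            queue = deque([(r, c)])
            values = []
            while queue and len(values) < max_size:
                i, j = queue.popleft()
                values.append(img[i, j])
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < rows and 0 <= nj < cols
                            and (ni, nj) not in seen
                            and abs(img[ni, nj] - seed) <= tol * abs(seed)):
                        seen.add((ni, nj))
                        queue.append((ni, nj))
            out[r, c] = float(np.mean(values))
    return out

# Step image with multiplicative (speckle-like) noise.
rng = np.random.default_rng(1)
clean = np.ones((16, 16))
clean[:, 8:] = 2.0
noisy = clean * (1 + 0.05 * rng.standard_normal(clean.shape))
filtered = adaptive_neighborhood_filter(noisy)
```

    On this toy image the filter reduces the variance inside each homogeneous half while leaving the step edge between them intact, since pixels across the edge fail the similarity test and never join the neighborhood.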

  7. Adaptive millimeter-wave synthetic aperture imaging for compressive sampling of sparse scenes.

    PubMed

    Mrozack, Alex; Heimbeck, Martin; Marks, Daniel L; Richard, Jonathan; Everitt, Henry O; Brady, David J

    2014-06-01

    We apply adaptive sensing techniques to the problem of locating sparse metallic scatterers using high-resolution, frequency-modulated continuous-wave W-band radar. Using a single detector, a frequency-stepped source, and a lateral translation stage, inverse synthetic aperture radar reconstruction techniques are used to search for one or two wire scatterers within a specified range, while an adaptive algorithm determines successive sampling locations. The two-dimensional location of each scatterer is thereby identified with sub-wavelength accuracy in as few as one-quarter the number of lateral steps required for a simple raster scan. The implications of applying this approach to more complex scattering geometries are explored in light of the various assumptions made.

  8. Small animal imaging by single photon emission using pinhole and coded aperture collimation

    SciTech Connect

    Garibaldi, F.; Accorsi, R.; Cinti, M.N.; Colilli, S.; Cusanno, F.; De Vincentis, G.; Fortuna, A.; Girolami, B.; Giuliani, F.; Gricia, M.; Lanza, R.; Loizzo, A.; Loizzo, S.; Lucentini, M.; Majewski, S.; Santavenere, F.; Pani, R.; Pellegrini, R.; Signore, A.; Scopinaro, F.

    2005-06-01

    The aim of this paper is to investigate the basic properties and limits of small animal imaging systems based on single photon detectors. Detectors for radio imaging of small animals are challenging because of the very high spatial resolution needed, possibly coupled with high efficiency to allow dynamic studies. These performances are hardly attainable with the single photon technique because of the collimator, which limits both spatial resolution and sensitivity. In this paper we describe a simple desktop detector based on a pixellated NaI(Tl) scintillator array coupled with a pinhole collimator and a PSPMT, the Hamamatsu R2486. The limits of such systems, as well as ways to overcome them, are shown. In fact, better light sampling at the anode level would allow better pixel identification for a higher number of pixels, which is one of the parameters defining image quality. The spatial resolution would also improve. The performance of this layout is compared with that of others using PSPMTs that differ from the R2486 in light sampling at the anode level and in area. We show how a further step, namely the substitution of the pinhole collimator with a coded aperture, allows a great improvement in system sensitivity while maintaining very good, possibly submillimetric, spatial resolution. Calculations and simulations show that sensitivity would improve by a factor of 50.

  9. Broadband chirality-coded meta-aperture for photon-spin resolving

    PubMed Central

    Du, Luping; Kou, Shan Shan; Balaur, Eugeniu; Cadusch, Jasper J.; Roberts, Ann; Abbey, Brian; Yuan, Xiao-Cong; Tang, Dingyuan; Lin, Jiao

    2015-01-01

    The behaviour of light transmitted through an individual subwavelength aperture becomes counterintuitive in the presence of surrounding ‘decoration', a phenomenon known as the extraordinary optical transmission. Despite being polarization-sensitive, such an individual nano-aperture, however, often cannot differentiate between the two distinct spin-states of photons because of the loss of photon information on light-aperture interaction. This creates a ‘blind-spot' for the aperture with respect to the helicity of chiral light. Here we report the development of a subwavelength aperture embedded with metasurfaces dubbed a ‘meta-aperture', which breaks this spin degeneracy. By exploiting the phase-shaping capabilities of metasurfaces, we are able to create specific meta-apertures in which the pair of circularly polarized light spin-states produces opposite transmission spectra over a broad spectral range. The concept incorporating metasurfaces with nano-apertures provides a venue for exploring new physics on spin-aperture interaction and potentially has a broad range of applications in spin-optoelectronics and chiral sensing. PMID:26628047

  10. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543

  11. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
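
    The bit-wise adaptive interface described above (each bit coded under a probability estimate that depends on previously encoded bits) can be illustrated with a toy adaptive binary arithmetic coder. Note that the report's technique is an alternative to arithmetic coding; this sketch, with its Laplace-smoothed counts, only demonstrates the adaptive-probability interface, not the report's code construction.

```python
def encode(bits):
    """Toy adaptive binary arithmetic coder: the probability estimate for
    each bit is updated from the bits already coded."""
    low, high = 0.0, 1.0
    ones, total = 1, 2                        # Laplace-smoothed counts
    for b in bits:
        p1 = ones / total
        mid = low + (high - low) * (1.0 - p1)  # [low, mid) codes 0, [mid, high) codes 1
        if b:
            low = mid
            ones += 1
        else:
            high = mid
        total += 1
    return (low + high) / 2.0                 # any number inside the final interval

def decode(code, n):
    """Mirror the encoder's interval arithmetic and probability updates."""
    low, high = 0.0, 1.0
    ones, total = 1, 2
    out = []
    for _ in range(n):
        p1 = ones / total
        mid = low + (high - low) * (1.0 - p1)
        if code >= mid:
            out.append(1)
            low = mid
            ones += 1
        else:
            out.append(0)
            high = mid
        total += 1
    return out
```

    Floating-point intervals suffice for short sequences because the decoder repeats the encoder's operations exactly; a practical coder would use integer renormalization instead.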

  12. Generating code adapted for interlinking legacy scalar code and extended vector code

    DOEpatents

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  13. Unsupervised polarimetric synthetic aperture radar image classification based on sketch map and adaptive Markov random field

    NASA Astrophysics Data System (ADS)

    Shi, Junfei; Li, Lingling; Liu, Fang; Jiao, Licheng; Liu, Hongying; Yang, Shuyuan; Liu, Lu; Hao, Hongxia

    2016-04-01

    The Markov random field (MRF) model is an effective tool for polarimetric synthetic aperture radar (PolSAR) image classification. However, due to the lack of suitable contextual information in conventional MRF methods, there is usually a contradiction between edge preservation and region homogeneity in the classification result. To preserve edge details and obtain homogeneous regions simultaneously, an adaptive MRF framework is proposed based on a polarimetric sketch map. The polarimetric sketch map provides edge positions and edge directions in detail, which can guide the selection of neighborhood structures. Specifically, the polarimetric sketch map is extracted to partition a PolSAR image into structural and nonstructural parts, and then adaptive neighborhoods are learned for the two parts. For structural areas, geometrically weighted neighborhood structures are constructed to preserve image details. For nonstructural areas, maximum homogeneous regions are obtained to improve region homogeneity. Experiments are conducted on both simulated and real PolSAR data, and the results illustrate that the proposed method achieves better performance in both region homogeneity and edge preservation than state-of-the-art methods.

  14. GPU-based ultra-fast direct aperture optimization for online adaptive radiation therapy

    NASA Astrophysics Data System (ADS)

    Men, Chunhua; Jia, Xun; Jiang, Steve B.

    2010-08-01

    Online adaptive radiation therapy (ART) has great promise to significantly reduce normal tissue toxicity and/or improve tumor control through real-time treatment adaptations based on the current patient anatomy. However, the major technical obstacle for clinical realization of online ART, namely the inability to achieve real-time efficiency in treatment re-planning, has yet to be solved. To overcome this challenge, this paper presents our work on the implementation of an intensity-modulated radiation therapy (IMRT) direct aperture optimization (DAO) algorithm on the graphics processing unit (GPU) based on our previous work on the CPU. We formulate the DAO problem as a large-scale convex programming problem, and use an exact method called the column generation approach to deal with its extremely large dimensionality on the GPU. Five 9-field prostate and five 5-field head-and-neck IMRT clinical cases with 5 × 5 mm2 beamlet size and 2.5 × 2.5 × 2.5 mm3 voxel size were tested to evaluate our algorithm on the GPU. It takes only 0.7-3.8 s for our implementation to generate high-quality treatment plans on an NVIDIA Tesla C1060 GPU card. Our work has therefore solved a major problem in developing ultra-fast (re-)planning technologies for online ART.

  15. A CLOSE COMPANION SEARCH AROUND L DWARFS USING APERTURE MASKING INTERFEROMETRY AND PALOMAR LASER GUIDE STAR ADAPTIVE OPTICS

    SciTech Connect

    Bernat, David; Bouchez, Antonin H.; Cromer, John L.; Dekany, Richard G.; Moore, Anna M.; Ireland, Michael; Tuthill, Peter; Martinache, Frantz; Angione, John; Burruss, Rick S.; Guiwits, Stephen R.; Henning, John R.; Hickey, Jeff; Kibblewhite, Edward; McKenna, Daniel L.; Petrie, Harold L.; Roberts, Jennifer; Shelton, J. Chris; Thicksten, Robert P.; Trinh, Thang

    2010-06-01

    We present a close companion search around 16 known early L dwarfs using aperture masking interferometry with Palomar laser guide star adaptive optics (LGS AO). The use of aperture masking allows the detection of close binaries, corresponding to projected physical separations of 0.6-10.0 AU for the targets of our survey. This survey achieved median contrast limits of ΔK ≈ 2.3 for separations between 1.2 λ/D and 4 λ/D, and ΔK ≈ 1.4 at 2/3 λ/D. We present four candidate binaries detected with moderate-to-high confidence (90%-98%). Two have projected physical separations less than 1.5 AU. This may indicate that tight-separation binaries contribute more significantly to the binary fraction than currently assumed, consistent with spectroscopic and photometric overluminosity studies. Ten targets of this survey have previously been observed with the Hubble Space Telescope as part of companion searches. We use the increased resolution of aperture masking to search for close or dim companions that would be obscured by full aperture imaging, finding two candidate binaries. This survey is the first application of aperture masking with LGS AO at Palomar. Several new techniques for the analysis of aperture masking data in the low signal-to-noise regime are explored.

  16. Simpler Adaptive Selection of Golomb Power-of-Two Codes

    NASA Technical Reports Server (NTRS)

    Kiely, Aaron

    2007-01-01

    An alternative method of adaptively selecting Golomb power-of-two (GPO2) codes has been devised for use in efficient, lossless encoding of sequences of non-negative integers from discrete sources. The method is intended especially for use in compression of digital image data. This method is somewhat suboptimal, but offers the advantage that it involves significantly less computation than a prior method of adaptively selecting optimum codes through brute-force application of all code options to every block of samples.
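
    The trade described above (a cheap parameter estimate versus brute-force search over all code options) can be sketched as follows. The `simple_k` rule below is a generic Rice-parameter estimate based on the block mean, chosen for illustration; it is not necessarily the exact selection rule of the NASA report.

```python
def gpo2_length(value, k):
    """Codeword length of a Golomb power-of-two code with parameter k:
    unary-coded quotient (value >> k, plus a terminating bit) + k remainder bits."""
    return (value >> k) + 1 + k

def block_length(block, k):
    """Total coded length of a block of non-negative integers under parameter k."""
    return sum(gpo2_length(v, k) for v in block)

def brute_force_k(block, k_max=16):
    """Optimal parameter found by trying every option (the costly prior method)."""
    return min(range(k_max + 1), key=lambda k: block_length(block, k))

def simple_k(block):
    """Cheap selection from the block mean alone: the largest k with
    2**(k+1) <= mean + 1, avoiding the brute-force sweep."""
    mean = sum(block) / len(block)
    k = 0
    while (1 << (k + 1)) <= mean + 1:
        k += 1
    return k
```

    On typical geometrically distributed blocks the cheap estimate matches or lands within one of the brute-force optimum, at a fraction of the computation.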

  17. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) along various directional spatial frequency axes are investigated for a cubic phase mask (CPM) with circular and square apertures. Although the OTF has no zero points, for a circular aperture it comes very close to zero at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for this close-to-zero behavior of the OTF is also analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent deblurred images over a large depth of field.

  18. Exo-planet Direct Imaging with On-Axis and/or Segmented Apertures in Space: Adaptive Compensation of Aperture Discontinuities

    NASA Astrophysics Data System (ADS)

    Soummer, Remi

    Capitalizing on a recent breakthrough in wavefront control theory for obscured apertures made by our group, we propose to demonstrate a method to achieve high contrast exoplanet imaging with on-axis obscured apertures. Our new algorithm, which we named Adaptive Compensation of Aperture Discontinuities (ACAD), provides the ability to compensate for aperture discontinuities (segment gaps and/or secondary mirror supports) by controlling deformable mirrors in a nonlinear wavefront control regime not utilized before but conceptually similar to the beam reshaping used in PIAA coronagraphy. We propose here an in-air demonstration at 1e-7 contrast, enabled by adding a second deformable mirror to our current test-bed. This expansion of the scope of our current efforts in exoplanet imaging technologies will enable us to demonstrate an integrated solution for wavefront control and starlight suppression on complex aperture geometries. It is directly applicable at scales from moderate-cost exoplanet probe missions to the 2.4 m AFTA telescopes to future flagship UVOIR observatories with apertures potentially 16-20 m. Searching for nearby habitable worlds with direct imaging is one of the top scientific priorities established by the Astro2010 Decadal Survey. Achieving this ambitious goal will require 1e-10 contrast on a telescope large enough to provide angular resolution and sensitivity to planets around a significant sample of nearby stars. Such a mission must of course also be realized at an achievable cost. Lightweight segmented mirror technology allows larger diameter optics to fit in any given launch vehicle as compared to monolithic mirrors, and lowers total life-cycle costs from construction through integration & test, making it a compelling option for future large space telescopes. At smaller scales, on-axis designs with secondary obscurations and supports are less challenging to fabricate and thus more affordable than the off-axis unobscured primary mirror designs

  19. Adaptive Modulation and Coding for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Hadi, S. S.; Tiong, T. C.

    2015-04-01

    Long Term Evolution (LTE) is the upgrade path for carriers with both GSM/UMTS networks and CDMA2000 networks. LTE is targeted to become the first global mobile phone standard, despite the different LTE frequencies and bands used in different countries. Adaptive Modulation and Coding (AMC) is used to increase network capacity and downlink data rates. Various modulation types are discussed, such as Quadrature Phase Shift Keying (QPSK) and Quadrature Amplitude Modulation (QAM). Spatial multiplexing techniques for a 4×4 MIMO antenna configuration are studied. With channel state information fed back from the mobile receiver to the base station transmitter, adaptive modulation and coding can adapt to the mobile wireless channel's condition, increasing spectral efficiency without increasing the bit error rate in noisy channels. In High-Speed Downlink Packet Access (HSDPA) in the Universal Mobile Telecommunications System (UMTS), AMC can be used to choose modulation types and the forward error correction (FEC) coding rate.
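
    Link-adaptation logic of this kind reduces to a lookup from measured channel quality to the most aggressive modulation-and-coding scheme whose SNR threshold is met. The thresholds and the small table below are illustrative placeholders, not values from the 3GPP CQI tables.

```python
# (SNR threshold in dB, modulation, code rate, bits per symbol) -- made-up values.
AMC_TABLE = [
    (1.0,  "QPSK",  1 / 2, 2),
    (7.0,  "16QAM", 1 / 2, 4),
    (12.0, "16QAM", 3 / 4, 4),
    (18.0, "64QAM", 3 / 4, 6),
]

def select_mcs(snr_db):
    """Pick the highest-rate scheme whose threshold the measured SNR meets.
    Returns (modulation, code rate, spectral efficiency in bit/s/Hz),
    or None if even the most robust scheme is unsupported."""
    chosen = None
    for threshold, modulation, rate, bits in AMC_TABLE:
        if snr_db >= threshold:
            chosen = (modulation, rate, bits * rate)
    return chosen
```

    A good channel thus carries more bits per symbol at a higher code rate, while a poor channel falls back to robust QPSK, which is exactly the capacity-versus-error trade AMC manages.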

  20. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  1. An adaptive algorithm for motion compensated color image coding

    NASA Technical Reports Server (NTRS)

    Kwatra, Subhash C.; Whyte, Wayne A.; Lin, Chow-Ming

    1987-01-01

    This paper presents an adaptive algorithm for motion compensated color image coding. The algorithm can be used for video teleconferencing or broadcast signals. Activity segmentation is used to reduce the bit rate and a variable stage search is conducted to save computations. The adaptive algorithm is compared with the nonadaptive algorithm and it is shown that with approximately 60 percent savings in computing the motion vector and 33 percent additional compression, the performance of the adaptive algorithm is similar to the nonadaptive algorithm. The adaptive algorithm results also show improvement of up to 1 bit/pel over interframe DPCM coding with nonuniform quantization. The test pictures used for this study were recorded directly from broadcast video in color.

  2. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaptation method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
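    The equal-error-distribution idea behind SAGE can be illustrated in one dimension: redistribute grid points so that each interval carries an equal share of a weight function built from the local solution gradient. The sketch below shows only this equidistribution principle; it is not the SAGE algorithm itself, and the weight `1 + |du/dx|` is an assumed illustrative choice.

```python
# 1D grid adaptation by equidistribution (illustrative, not SAGE itself).
# Points are moved so that each interval holds an equal share of a
# weight that grows with the local solution gradient.

def adapt_grid(x, u, npts):
    """Redistribute npts points on [x[0], x[-1]], equidistributing
    w = 1 + |du/dx| over the piecewise-linear solution u(x)."""
    # Per-interval weight integral: (1 + |slope|) * length
    w = []
    for i in range(len(x) - 1):
        dx = x[i + 1] - x[i]
        slope = abs((u[i + 1] - u[i]) / dx)
        w.append((1.0 + slope) * dx)
    total = sum(w)
    # Cumulative weight at the original points
    cum = [0.0]
    for wi in w:
        cum.append(cum[-1] + wi)
    # Invert the cumulative weight at equally spaced levels
    new_x = []
    for k in range(npts):
        target = total * k / (npts - 1)
        j = 0
        while j < len(w) - 1 and cum[j + 1] < target:
            j += 1
        frac = (target - cum[j]) / w[j]
        new_x.append(x[j] + frac * (x[j + 1] - x[j]))
    return new_x
```

    For a solution with a steep front, the adapted grid clusters points near the front while keeping a smooth distribution elsewhere, which is the balance the abstract describes.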

  3. The multidimensional self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1992-01-01

    This report describes the multidimensional self-adaptive grid code SAGE. A two-dimensional version of this code was described in an earlier report by the authors. The formulation of the multidimensional version is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code and provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simplified input options make this a flexible and user-friendly code. The new SAGE code can accommodate both two-dimensional and three-dimensional flow problems.

  4. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of the coder for any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incurring extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
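    The threshold-driven selection of block coders can be sketched as follows. For brevity this sketch "codes" each block by its mean value rather than with the DCT source coders of MBC; the recursive refine-when-distortion-exceeds-threshold rule is the part it demonstrates.

```python
# Threshold-driven variable-block-size coding (illustrative sketch).
# Real MBC chooses among DCT source coders; here each block is crudely
# "coded" by its mean value, and a block whose distortion exceeds the
# threshold is split into four sub-blocks, mimicking the selection rule.

def code_block(img, r0, c0, size, threshold, out):
    block = [img[r][c] for r in range(r0, r0 + size)
                       for c in range(c0, c0 + size)]
    mean = sum(block) / len(block)
    mse = sum((v - mean) ** 2 for v in block) / len(block)
    if mse <= threshold or size == 1:
        out.append((r0, c0, size, mean))    # accept this coder
    else:                                   # too distorted: refine
        half = size // 2
        for dr in (0, half):
            for dc in (0, half):
                code_block(img, r0 + dr, c0 + dc, half, threshold, out)

def mixture_code(img, threshold):
    """Code a square image whose side is a power of two."""
    out = []
    code_block(img, 0, 0, len(img), threshold, out)
    return out
```

    Flat regions are covered by one large block while busy regions are recursively refined, which is how MBC spends bits where the distortion criterion demands them.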

  5. Scalable hologram video coding for adaptive transmitting service.

    PubMed

    Seo, Young-Ho; Lee, Yoon-Hyuk; Yoo, Ji-Sang; Kim, Dong-Wook

    2013-01-01

    This paper discusses processing techniques for an adaptive digital holographic video service in various reconstruction environments, and proposes two new scalable coding schemes. The proposed schemes are constructed according to the hologram generation or acquisition scheme: hologram-based resolution-scalable coding (HRS) and light-source-based signal-to-noise-ratio scalable coding (LSS). HRS is applied to holograms that are already acquired or generated, while LSS is applied to the light sources before the digital holograms are generated. In the LSS scheme, the light source information is losslessly coded because it is too important to lose, while the HRS scheme adopts a lossy coding method. In an experiment, we provide eight stages of the HRS scheme, whose data compression ratios range from 1:1 to 100:1 for each data layer. For LSS, scalable coding schemes with four layers and 16 layers are provided. We experimentally show that the proposed techniques make it possible to serve a digital hologram video adaptively to various displays with different resolutions, computation capabilities of the receiver side, or bandwidths of the network.

  6. ENZO: AN ADAPTIVE MESH REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Bryan, Greg L.; Turk, Matthew J.; Norman, Michael L.; Bordner, James; Xu, Hao; Kritsuk, Alexei G.; O'Shea, Brian W.; Smith, Britton; Abel, Tom; Wang, Peng; Skillman, Samuel W.; Wise, John H.; Reynolds, Daniel R.; Collins, David C.; Harkness, Robert P.; Kim, Ji-hoon; Kuhlen, Michael; Goldbaum, Nathan; Hummels, Cameron; Collaboration: Enzo Collaboration; and others

    2014-04-01

    This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in one, two, and three dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.

  7. In situ X-ray beam imaging using an off-axis magnifying coded aperture camera system.

    PubMed

    Kachatkou, Anton; Kyele, Nicholas; Scott, Peter; van Silfhout, Roelof

    2013-07-01

    An imaging model and an image reconstruction algorithm for a transparent X-ray beam imaging and position measuring instrument are presented. The instrument relies on a coded aperture camera to record magnified images of the footprint of the incident beam on a thin foil placed in the beam at an oblique angle. The imaging model represents the instrument as a linear system whose impulse response takes into account the image blur owing to the finite thickness of the foil, the shape and size of camera's aperture and detector's point-spread function. The image reconstruction algorithm first removes the image blur using the modelled impulse response function and then corrects for geometrical distortions caused by the foil tilt. The performance of the image reconstruction algorithm was tested in experiments at synchrotron radiation beamlines. The results show that the proposed imaging system produces images of the X-ray beam cross section with a quality comparable with images obtained using X-ray cameras that are exposed to the direct beam.

  8. FLY: a Tree Code for Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Becciani, U.; Antonuccio-Delogu, V.; Costa, A.; Ferro, D.

    FLY is a public domain parallel treecode, which makes heavy use of the one-sided communication paradigm to handle the management of the tree structure. It implements the equations for cosmological evolution and can be run for different cosmological models. This paper shows an example of the integration of a tree N-body code with an adaptive mesh, following the PARAMESH scheme. This new implementation will allow the FLY output, and more generally any binary output, to be used with any hydrodynamics code that adopts the PARAMESH data structure, to study compressible flow problems.

  9. A Mechanically-Cooled, Highly-Portable, HPGe-Based, Coded-Aperture Gamma-Ray Imager

    SciTech Connect

    Ziock, Klaus-Peter; Boehnen, Chris Bensing; Hayward, Jason P; Raffo-Caiado, Ana Claudia

    2010-01-01

    Coded-aperture gamma-ray imaging is a mature technology that is capable of providing accurate and quantitative images of nuclear materials. Although it is potentially of high value to the safeguards and arms-control communities, it has yet to be fully embraced by those communities. One reason for this is the limited choice, high cost, and low efficiency of commercial instruments, while instruments made by research organizations are frequently large and/or unsuitable for field work. In this paper we present the results of a project that mates the coded-aperture imaging approach with the latest in commercially available, position-sensitive, High Purity Germanium (HPGe) detectors. The instrument replaces a laboratory prototype that was unsuitable for anything other than demonstrations. The original instrument, and the cart on which it is mounted to provide mobility and pointing capabilities, has a footprint of ~2/3 m x 2 m, weighs ~100 kg, and requires cryogen refills every few days. In contrast, the new instrument is tripod mounted, weighs of order 25 kg, operates with a laptop computer, and is mechanically cooled. The instrument is being used in a program that is exploring the use of combined radiation and laser scanner imaging. The former provides information on the presence, location, and type of nuclear materials while the latter provides design verification information. To align the gamma-ray images with the laser scanner data, the Ge imager is fitted and aligned to a visible-light stereo imaging unit. This unit generates a locus of 3D points that can be matched to the precise laser scanner data. With this approach, the two instruments can be used completely independently at a facility, and yet the data can be accurately overlaid based on the very structures that are being measured.

  10. Cellular Adaptation Facilitates Sparse and Reliable Coding in Sensory Pathways

    PubMed Central

    Farkhooi, Farzad; Froese, Anja; Muller, Eilif; Menzel, Randolf; Nawrot, Martin P.

    2013-01-01

    Most neurons in peripheral sensory pathways initially respond vigorously when a preferred stimulus is presented, but adapt as stimulation continues. It is unclear how this phenomenon affects stimulus coding in the later stages of sensory processing. Here, we show that a temporally sparse and reliable stimulus representation develops naturally in sequential stages of a sensory network with adapting neurons. As a modeling framework we employ a mean-field approach together with an adaptive population density treatment, accompanied by numerical simulations of spiking neural networks. We find that cellular adaptation plays a critical role in the dynamic reduction of the trial-by-trial variability of cortical spike responses by transiently suppressing self-generated fast fluctuations in the cortical balanced network. This provides an explanation for a widespread cortical phenomenon by a simple mechanism. We further show that in the insect olfactory system cellular adaptation is sufficient to explain the emergence of the temporally sparse and reliable stimulus representation in the mushroom body. Our results reveal a generic, biophysically plausible mechanism that can explain the emergence of a temporally sparse and reliable stimulus representation within a sequential processing architecture. PMID:24098101
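    The adaptation phenomenon described above can be captured by the simplest rate model of spike-frequency adaptation: an adaptation variable builds up with activity and subtracts from the drive, so the response to a sustained stimulus decays from an initial peak. This is generic textbook dynamics with assumed parameters, not the paper's mean-field population-density model.

```python
# Spike-frequency adaptation, in its simplest rate-model form: an
# adaptation variable a(t) tracks recent activity and subtracts from
# the drive, so the response to a sustained stimulus decays from an
# initial peak. Generic illustrative dynamics, not the paper's model.

def simulate_adapting_rate(stimulus, dt=0.001, tau_a=0.2, g_a=2.0):
    """Return firing rates for a stimulus sequence (one value per dt)."""
    rates, a = [], 0.0
    for s in stimulus:
        r = max(0.0, s - g_a * a)      # rate = rectified net drive
        a += dt * (r - a) / tau_a      # adaptation tracks the rate
        rates.append(r)
    return rates
```

    For a step stimulus the rate starts at the full drive and relaxes toward a lower steady state (here 1/3 of the drive for g_a = 2), the vigorous-onset-then-adapt profile the abstract describes.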

  11. Adaptive shape coding for perceptual decisions in the human brain.

    PubMed

    Kourtzi, Zoe; Welchman, Andrew E

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  12. Adaptive shape coding for perceptual decisions in the human brain

    PubMed Central

    Kourtzi, Zoe; Welchman, Andrew E.

    2015-01-01

    In its search for neural codes, the field of visual neuroscience has uncovered neural representations that reflect the structure of stimuli of variable complexity from simple features to object categories. However, accumulating evidence suggests an adaptive neural code that is dynamically shaped by experience to support flexible and efficient perceptual decisions. Here, we review work showing that experience plays a critical role in molding midlevel visual representations for perceptual decisions. Combining behavioral and brain imaging measurements, we demonstrate that learning optimizes feature binding for object recognition in cluttered scenes, and tunes the neural representations of informative image parts to support efficient categorical judgements. Our findings indicate that similar learning mechanisms may mediate long-term optimization through development, tune the visual system to fundamental principles of feature binding, and optimize feature templates for perceptual decisions. PMID:26024511

  13. Adaptive neural coding: from biological to behavioral decision-making

    PubMed Central

    Louie, Kenway; Glimcher, Paul W.; Webb, Ryan

    2015-01-01

    Empirical decision-making in diverse species deviates from the predictions of normative choice theory, but why such suboptimal behavior occurs is unknown. Here, we propose that deviations from optimality arise from biological decision mechanisms that have evolved to maximize choice performance within intrinsic biophysical constraints. Sensory processing utilizes specific computations such as divisive normalization to maximize information coding in constrained neural circuits, and recent evidence suggests that analogous computations operate in decision-related brain areas. These adaptive computations implement a relative value code that may explain the characteristic context-dependent nature of behavioral violations of classical normative theory. Examining decision-making at the computational level thus provides a crucial link between the architecture of biological decision circuits and the form of empirical choice behavior. PMID:26722666
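    Divisive normalization, the computation named above, divides each unit's drive by a pooled activity term. The form below is the generic textbook version (the saturation constant sigma and the uniform pool are assumptions), not the specific model fitted in the paper.

```python
# Divisive normalization (generic textbook form, illustrative only).
# Each response is the unit's own drive divided by a saturation
# constant plus the summed drive of the normalization pool.

def divisive_normalization(drives, sigma=1.0):
    pool = sum(drives)
    return [d / (sigma + pool) for d in drives]
```

    This implements a relative value code: each output depends on the context supplied by the other inputs, so adding a strong alternative suppresses the responses to the others, the context dependence the abstract links to behavioral violations of normative theory.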

  14. AMRA: An Adaptive Mesh Refinement hydrodynamic code for astrophysics

    NASA Astrophysics Data System (ADS)

    Plewa, T.; Müller, E.

    2001-08-01

    Implementation details and test cases of a newly developed hydrodynamic code, amra, are presented. The numerical scheme exploits the adaptive mesh refinement technique coupled to modern high-resolution schemes which are suitable for relativistic and non-relativistic flows. Various physical processes are incorporated using the operator splitting approach, and include self-gravity, nuclear burning, physical viscosity, implicit and explicit schemes for conductive transport, simplified photoionization, and radiative losses from an optically thin plasma. Several aspects related to the accuracy and stability of the scheme are discussed in the context of hydrodynamic and astrophysical flows.

  15. Adaptive Synaptogenesis Constructs Neural Codes That Benefit Discrimination

    PubMed Central

    Thomas, Blake T.; Blalock, Davis W.; Levy, William B.

    2015-01-01

    Intelligent organisms face a variety of tasks requiring the acquisition of expertise within a specific domain, including the ability to discriminate between a large number of similar patterns. From an energy-efficiency perspective, effective discrimination requires a prudent allocation of neural resources with more frequent patterns and their variants being represented with greater precision. In this work, we demonstrate a biologically plausible means of constructing a single-layer neural network that adaptively (i.e., without supervision) meets this criterion. Specifically, the adaptive algorithm includes synaptogenesis, synaptic shedding, and bi-directional synaptic weight modification to produce a network with outputs (i.e. neural codes) that represent input patterns proportional to the frequency of related patterns. In addition to pattern frequency, the correlational structure of the input environment also affects allocation of neural resources. The combined synaptic modification mechanisms provide an explanation of neuron allocation in the case of self-taught experts. PMID:26176744

  16. SAGE: The Self-Adaptive Grid Code. 3

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1999-01-01

    The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.

  17. Coded aperture imaging of fusion source in a plasma focus operated with pure D₂ and a D₂-Kr gas admixture

    SciTech Connect

    Springham, S. V.; Talebitaher, A.; Shutler, P. M. E.; Rawat, R. S.; Lee, P.; Lee, S.

    2012-09-10

    The coded aperture imaging (CAI) technique has been used to investigate the spatial distribution of DD fusion in a 1.6 kJ plasma focus (PF) device operated in, alternatively, pure deuterium or a deuterium-krypton admixture. The coded mask pattern is based on a Singer cyclic difference set with 25% open fraction, positioned close to 90° to the plasma focus axis, with CR-39 detectors used to register tracks of protons from the D(d, p)T reaction. Comparing the CAI proton images for pure D₂ and D₂-Kr admixture operation reveals clear differences in size, density, and shape between the fusion sources for these two cases.
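    The value of a cyclic difference set as a mask pattern is that its periodic autocorrelation is two-valued, which is what makes correlation decoding of the multiplexed image possible. The sketch below uses a quadratic-residue difference set rather than a Singer set (an assumption made for brevity); the two-valued autocorrelation property it demonstrates is shared by both.

```python
# Why difference-set masks decode cleanly: the periodic autocorrelation
# of the open-hole pattern is two-valued (k at zero shift, a constant
# lambda at every other shift). Shown here with a quadratic-residue
# difference set mod a prime p = 3 (mod 4); the Singer sets used in
# the paper above share this property.

def qr_difference_set(p):
    """Quadratic residues mod p (p prime, p % 4 == 3)."""
    return sorted({(i * i) % p for i in range(1, p)})

def mask(p):
    """0/1 open-hole pattern of length p from the difference set."""
    holes = set(qr_difference_set(p))
    return [1 if i in holes else 0 for i in range(p)]

def periodic_autocorrelation(a, shift):
    n = len(a)
    return sum(a[i] * a[(i + shift) % n] for i in range(n))
```

    For p = 11 the set {1, 3, 4, 5, 9} is an (11, 5, 2) difference set: autocorrelation 5 at zero shift and exactly 2 at every other shift, so a matched decoding kernel recovers a delta-like point response.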

  18. A scalable multi-chip architecture to realise large-format microshutter arrays for coded aperture applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; King, David O.; Smith, Gilbert W.; Stone, Steven M.; Brown, Alan G.; Gordon, Neil T.; Slinger, Christopher W.; Cannon, Kevin; Riches, Stephen; Rogers, Stanley

    2009-08-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lens-less imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. Previously we reported on the realization of a 2 x 2 cm single-chip mask in the mid-IR based on polysilicon micro-opto-electro-mechanical systems (MOEMS) technology and its integration with ASIC drive electronics using conventional wire bonding. The MOEMS architecture employs interference effects to modulate incident light - achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage - and uses a hysteretic row/column scheme for addressing. In this paper we present the latest transmission results in the mid-IR band (3-5 μm) and report on progress in developing a scalable architecture based on a tiled approach using multiple 2 x 2 cm MOEMS chips with associated control ASICs integrated using flip-chip technology. Initial work has focused on a 2 x 2 tiled array as a stepping stone towards an 8 x 8 array.

  19. An Adaptive Motion Estimation Scheme for Video Coding

    PubMed Central

    Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised. PMID:24672313

  20. An adaptive motion estimation scheme for video coding.

    PubMed

    Liu, Pengyu; Gao, Yuan; Jia, Kebin

    2014-01-01

    The unsymmetrical-cross multihexagon-grid search (UMHexagonS) is one of the best fast Motion Estimation (ME) algorithms in video encoding software. It achieves excellent coding performance by using a hybrid block-matching search pattern and multiple initial search point predictors, at the cost of increased ME computational complexity. Reducing the time consumed by ME is one of the key factors in improving video coding efficiency. In this paper, we propose an adaptive motion estimation scheme to further reduce the calculation redundancy of UMHexagonS. First, new motion estimation search patterns are designed according to the statistical results of motion vector (MV) distribution information. Then, an MV distribution prediction method is designed, covering prediction of both the size and the direction of the MV. Finally, according to the MV distribution prediction results, self-adaptive subregional searching is achieved with the new search patterns. Experimental results show that more than 50% of the total search points are eliminated compared to the UMHexagonS algorithm in JM 18.4 of H.264/AVC. As a result, the proposed scheme can reduce ME time by up to 20.86% while the rate-distortion performance is not compromised.
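    What block-matching ME computes, and why pruning search points matters, can be seen from a minimal full-search SAD matcher. This is not UMHexagonS or the paper's adaptive scheme; it is the brute-force baseline whose candidate set those algorithms shrink.

```python
# Minimal block-matching motion estimation by full search (illustrative;
# UMHexagonS and the adaptive scheme above prune this candidate set).

def sad(ref, cur, rx, ry, cx, cy, bs):
    """Sum of absolute differences between the block in cur at (cx, cy)
    and a candidate block in ref at (rx, ry), block size bs."""
    return sum(abs(cur[cy + j][cx + i] - ref[ry + j][rx + i])
               for j in range(bs) for i in range(bs))

def full_search(ref, cur, cx, cy, bs, rng):
    """Return the motion vector (dx, dy) minimizing SAD within +/- rng."""
    best, best_mv = None, (0, 0)
    for dy in range(-rng, rng + 1):
        for dx in range(-rng, rng + 1):
            rx, ry = cx + dx, cy + dy
            if 0 <= rx and 0 <= ry and \
               rx + bs <= len(ref[0]) and ry + bs <= len(ref):
                cost = sad(ref, cur, rx, ry, cx, cy, bs)
                if best is None or cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv
```

    Full search evaluates (2*rng + 1)^2 candidates per block; the adaptive scheme's contribution is cutting that candidate set by predicting where the MV is likely to lie.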

  1. Cooperative solutions coupling a geometry engine and adaptive solver codes

    NASA Technical Reports Server (NTRS)

    Dickens, Thomas P.

    1995-01-01

    Follow-on work has progressed in using Aero Grid and Paneling System (AGPS), a geometry and visualization system, as a dynamic real time geometry monitor, manipulator, and interrogator for other codes. In particular, AGPS has been successfully coupled with adaptive flow solvers which iterate, refining the grid in areas of interest, and continuing on to a solution. With the coupling to the geometry engine, the new grids represent the actual geometry much more accurately since they are derived directly from the geometry and do not use refits to the first-cut grids. Additional work has been done with design runs where the geometric shape is modified to achieve a desired result. Various constraints are used to point the solution in a reasonable direction which also more closely satisfies the desired results. Concepts and techniques are presented, as well as examples of sample case studies. Issues such as distributed operation of the cooperative codes versus running all codes locally and pre-calculation for performance are discussed. Future directions are considered which will build on these techniques in light of changing computer environments.

  2. Olfactory coding in Drosophila larvae investigated by cross-adaptation.

    PubMed

    Boyle, Jennefer; Cobb, Matthew

    2005-09-01

    In order to reveal aspects of olfactory coding, the effects of sensory adaptation on the olfactory responses of first-instar Drosophila melanogaster larvae were tested. Larvae were pre-stimulated with a homologous series of acetic esters (C3-C9), and their responses to each of these odours were then measured. The overall patterns suggested that methyl acetate has no specific pathway but was detected by all the sensory pathways studied here, that butyl and pentyl acetate tended to have similar effects to each other and that hexyl acetate was processed separately from the other odours. In a number of cases, cross-adaptation transformed a control attractive response into a repulsive response; in no case was an increase in attractiveness observed. This was investigated by studying changes in dose-response curves following pre-stimulation. These findings are discussed in light of the possible intra- and intercellular mechanisms of adaptation and the advantage of altered sensitivity for the larva. PMID:16155221

  3. Method for measuring the focal spot size of an x-ray tube using a coded aperture mask and a digital detector

    SciTech Connect

    Russo, Paolo; Mettivier, Giovanni

    2011-04-15

    Purpose: The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The method of the coded mask camera allows one to obtain a one-shot, accurate, and direct measurement of the two dimensions of the focal spot (like that for a pinhole camera) but at a low tube loading (like that for a slit camera). A large number of small apertures in the coded mask operate as a "multipinhole" with greater efficiency than a single pinhole, while keeping the resolution of a single pinhole. Methods: X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode; 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon-counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (type no-two-holes-touching modified uniformly redundant array) with 480 0.07 mm apertures, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with 0.3 mm focal spot was also carried out. Results: The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot

  4. N-Body Code with Adaptive Mesh Refinement

    NASA Astrophysics Data System (ADS)

    Yahagi, Hideki; Yoshii, Yuzuru

    2001-09-01

    We have developed a simulation code with the techniques that enhance both spatial and time resolution of the particle-mesh (PM) method, for which the spatial resolution is restricted by the spacing of the structured mesh. The adaptive-mesh refinement (AMR) technique subdivides the cells that satisfy the refinement criterion recursively. The hierarchical meshes are maintained by a special data structure and are modified in accordance with the change of particle distribution. In general, as the resolution of the simulation increases, its time step must be shortened and more computational time is required to complete the simulation. Since the AMR enhances the spatial resolution locally, we reduce the time step locally also, instead of shortening it globally. For this purpose, we used a technique of hierarchical time steps (HTS), which changes the time step, from particle to particle, depending on the size of the cell in which particles reside. Some test calculations show that our implementation of AMR and HTS is successful. We have performed cosmological simulation runs based on our code and found that many of the halo objects have density profiles that are well fitted to the universal profile proposed in 1996 by Navarro, Frenk, & White over the entire range of their radius.
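    The hierarchical time step idea can be sketched simply: a particle in a cell refined L levels deep takes 2^L substeps of the global step, so refined regions advance with proportionally smaller dt. The sketch below is a schematic of this bookkeeping (drift only, forces omitted), not the authors' implementation.

```python
# Hierarchical time steps (HTS), schematically: a particle residing in
# an AMR cell at refinement level L advances with dt_global / 2**L,
# i.e. it takes 2**L substeps per global step. Illustrative only.

def substeps_per_global_step(level):
    return 2 ** level

def advance_particles(positions, velocities, levels, dt_global):
    """Advance each particle with its own level-dependent time step."""
    new_positions = []
    for x, v, lev in zip(positions, velocities, levels):
        n = substeps_per_global_step(lev)
        dt = dt_global / n
        for _ in range(n):       # n small kicks == one global step
            x = x + v * dt       # (drift only; forces omitted)
        new_positions.append(x)
    return new_positions
```

    With constant velocity and no forces the result is level-independent, which makes the bookkeeping easy to check; the payoff appears only when forces are re-evaluated at each substep, which is where the local time resolution is gained.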

  5. 3D Finite Element Trajectory Code with Adaptive Meshing

    NASA Astrophysics Data System (ADS)

    Ives, Lawrence; Bui, Thuc; Vogler, William; Bauer, Andy; Shephard, Mark; Beal, Mark; Tran, Hien

    2004-11-01

    Beam Optics Analysis, a new, 3D charged particle program is available and in use for the design of complex, 3D electron guns and charged particle devices. The code reads files directly from most CAD and solid modeling programs, includes an intuitive Graphical User Interface (GUI), and a robust mesh generator that is fully automatic. Complex problems can be set up, and analysis initiated in minutes. The program includes a user-friendly post processor for displaying field and trajectory data using 3D plots and images. The electrostatic solver is based on the standard nodal finite element method. The magnetostatic field solver is based on the vector finite element method and is also called during the trajectory simulation process to solve for self magnetic fields. The user imports the geometry from essentially any commercial CAD program and uses the GUI to assign parameters (voltages, currents, dielectric constant) and designate emitters (including work function, emitter temperature, and number of trajectories). The mesh is then generated automatically and the analysis is performed, including mesh adaptation to improve accuracy and optimize computational resources. This presentation will provide information on the basic structure of the code, its operation, and its capabilities.

  6. RAM: a Relativistic Adaptive Mesh Refinement Hydrodynamics Code

    SciTech Connect

    Zhang, Wei-Qun; MacFadyen, Andrew I. (Institute for Advanced Study, Princeton)

    2005-06-06

    The authors have developed a new computer code, RAM, to solve the conservative equations of special relativistic hydrodynamics (SRHD) using adaptive mesh refinement (AMR) on parallel computers. They have implemented a characteristic-wise, finite difference, weighted essentially non-oscillatory (WENO) scheme using the full characteristic decomposition of the SRHD equations to achieve fifth-order accuracy in space. For time integration they use the method of lines with a third-order total variation diminishing (TVD) Runge-Kutta scheme. They have also implemented fourth- and fifth-order Runge-Kutta time integration schemes for comparison. The implementation of AMR and parallelization is based on the FLASH code. RAM is modular and includes the capability to easily swap hydrodynamics solvers, reconstruction methods and physics modules. In addition to WENO, they have implemented a finite volume module with the piecewise parabolic method (PPM) for reconstruction and the modified Marquina approximate Riemann solver to work with TVD Runge-Kutta time integration. They examine the difficulty of accurately simulating shear flows in numerical relativistic hydrodynamics codes. They show that under-resolved simulations of simple test problems with transverse velocity components produce incorrect results and demonstrate the ability of RAM to correctly solve these problems. RAM has been tested in one, two and three dimensions and in Cartesian, cylindrical and spherical coordinates. They have demonstrated fifth-order accuracy for WENO in one and two dimensions and performed detailed comparisons with other schemes, for which they show significantly lower convergence rates. Extensive testing is presented demonstrating the ability of RAM to address challenging open questions in relativistic astrophysics.

  7. Simulating ion beam extraction from a single aperture triode acceleration column: A comparison of the beam transport codes IGUN and PBGUNS with test stand data

    SciTech Connect

    Patel, A.; Wills, J. S. C.; Diamond, W. T.

    2008-04-15

    Ion beam extraction from two different ion sources with single-aperture triode extraction columns was simulated with the particle-beam transport codes PBGUNS and IGUN. For each ion source, the simulation results are compared with experimental data generated on well-equipped test stands. Both codes reproduced the qualitative response of the extracted ion beams to the incremental and scaled changes of the extraction electrode geometry observed on the test stands. Numerical values of optimum beam currents and beam emittance generated by the simulations also agree well with the test stand data.

  8. Adaptive distributed video coding with correlation estimation using expectation propagation

    NASA Astrophysics Data System (ADS)

    Cui, Lijuan; Wang, Shuang; Jiang, Xiaoqian; Cheng, Samuel

    2012-10-01

    Distributed video coding (DVC) is rapidly gaining popularity because it shifts complexity from the encoder to the decoder with, at least in theory, no loss in compression performance. In contrast with conventional video codecs, the inter-frame correlation in DVC is exploited at the decoder, based on the received syndromes of the Wyner-Ziv (WZ) frame and a side information (SI) frame generated from other frames available only at the decoder. However, the ultimate decoding performance of DVC rests on the assumption that perfect knowledge of the correlation statistics between the WZ and SI frames is available at the decoder. The ability to obtain a good statistical correlation estimate is therefore increasingly important in practical DVC implementations. Existing correlation estimation methods in DVC fall into two main types: pre-estimation, where estimation is completed before decoding, and on-the-fly (OTF) estimation, where the estimate is refined iteratively during decoding. Because changes between frames can be unpredictable or dynamic, OTF methods usually outperform pre-estimation techniques, at the cost of increased decoding complexity (e.g., sampling methods). In this paper, we propose a low-complexity adaptive DVC scheme using expectation propagation (EP), in which correlation estimation is performed OTF, jointly with decoding of the factor-graph-based DVC code. Among approximate inference methods, EP generally offers a better tradeoff between accuracy and complexity. Experimental results show that the proposed scheme outperforms the benchmark state-of-the-art DISCOVER codec and variants without correlation tracking, and achieves decoding performance comparable to sampling-based methods at significantly lower complexity.

  10. The characterization and optimization of NIO1 ion source extraction aperture using a 3D particle-in-cell code.

    PubMed

    Taccogna, F; Minelli, P; Cavenago, M; Veltri, P; Ippolito, N

    2016-02-01

    The geometry of a single aperture in the extraction grid plays a relevant role in optimizing negative ion transport and extraction probability in a hybrid negative ion source. For this reason, a three-dimensional particle-in-cell/Monte Carlo collision model of the extraction region around a single aperture, including part of the source and part of the acceleration region (up to the middle of the extraction grid (EG)), has been developed for the new aperture design prepared for the NIO1 (negative ion optimization 1) source. Results show that the dimensions of the flat and chamfered parts of the aperture, and the slope of the chamfer facing the source region, maximize the product of the production rate and the extraction probability of surface-produced negative ions by allowing the best EG field penetration. The negative ion density in the yz plane is also reported. PMID:26932027

  11. Adaptive phase-coded reconstruction for cardiac CT

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Mayo, John; Acharya, Kishor; Pan, Tin-Su

    2000-04-01

    Cardiac imaging with conventional computed tomography (CT) has gained significant attention in recent years. New hardware development enables a CT scanner to rotate at a faster speed so that less cardiac motion is present in acquired projection data. Many new tomographic reconstruction techniques have also been developed to reduce the artifacts induced by cardiac motion. Most of the algorithms make use of the projection data collected over several cardiac cycles to formulate a single projection data set. Because the data set is formed with samples collected roughly in the same phase of a cardiac cycle, the temporal resolution of the newly formed data set is significantly improved compared with projections collected continuously. In this paper, we present an adaptive phase-coded reconstruction scheme (APR) for cardiac CT. Unlike previously proposed schemes where the projection sector size is identical, APR determines each sector size based on the tomographic reconstruction algorithm. The newly proposed scheme ensures that the temporal resolution of each sector is substantially equal. In addition, the scan speed is selected based on the measured EKG signal of the patient.

  12. Adaptive Source Coding Schemes for Geometrically Distributed Integer Alphabets

    NASA Technical Reports Server (NTRS)

    Cheung, K-M.; Smyth, P.

    1993-01-01

    We revisit the Gallager and van Voorhis optimal source coding scheme for geometrically distributed non-negative integer alphabets and show that the various subcodes of the popular Rice algorithm can be derived from the Gallager and van Voorhis code.
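    The connection can be illustrated with a Golomb encoder (an illustration, not code from the paper): Gallager and van Voorhis showed that Golomb codes are optimal for geometric sources, and the Rice subcodes are the special case where the Golomb parameter m is a power of two, so the remainder is written in plain k = log2(m) bits.

```python
# Golomb code: quotient n // m in unary, remainder n % m in (truncated)
# binary. For m = 2**k this reduces to the Rice subcode with k plain bits.

def golomb_encode(n, m):
    """Encode non-negative integer n with Golomb parameter m >= 1."""
    q, r = divmod(n, m)
    out = "1" * q + "0"                  # quotient in unary, 0-terminated
    b = m.bit_length() - 1               # floor(log2(m))
    if m == 1:
        return out                       # remainder is always 0: no bits
    if m == (1 << b):                    # Rice case: m a power of two
        return out + format(r, "0%db" % b)
    cutoff = (1 << (b + 1)) - m          # truncated binary for general m
    if r < cutoff:
        return out + format(r, "0%db" % b)
    return out + format(r + cutoff, "0%db" % (b + 1))
```

    For example, with m = 4 (the Rice subcode k = 2), n = 5 encodes as unary quotient "10" followed by remainder "01".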

  13. Was Wright right? The canonical genetic code is an empirical example of an adaptive peak in nature; deviant genetic codes evolved using adaptive bridges.

    PubMed

    Seaborg, David M

    2010-08-01

    The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of "adaptive bridges," neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept.

  14. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-01

    A mutual-information-inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and the LDPC code rate are jointly considered in the design, which yields a better-performing scheme at the same SNR. A matched nonbinary (NB) LDPC code is used, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB-LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared with traditional LDPC-coded star-8-QAM. Moreover, the proposed NB-LDPC-coded 5-QAM and 7-QAM perform even better than LDPC-coded QPSK.
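    The entropy/power tradeoff behind non-uniform shaping can be illustrated with a generic Maxwell-Boltzmann-style weighting of a 3x3 (9-point) grid; this is a textbook-style sketch, not the constellation design or probabilities used in the paper.

```python
# Non-uniform shaping sketch: weight low-energy points of a 3x3 grid more
# heavily, p_i proportional to exp(-lam * |s_i|^2), trading source entropy
# (bits/symbol) for reduced average transmit power.

import math

def shaped_9qam(lam):
    pts = [complex(i, q) for i in (-1, 0, 1) for q in (-1, 0, 1)]
    w = [math.exp(-lam * abs(s) ** 2) for s in pts]
    z = sum(w)
    probs = [x / z for x in w]
    entropy = -sum(p * math.log2(p) for p in probs)        # bits/symbol
    power = sum(p * abs(s) ** 2 for p, s in zip(probs, pts))
    return entropy, power
```

    At lam = 0 the constellation is uniform (entropy log2(9), power 4/3); any lam > 0 lowers the mean power at the cost of some entropy, which is the lever a joint shaping/code-rate design exploits.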

  16. Studies of the chromatic properties and dynamic aperture of the BNL colliding-beam accelerator. [PATRICIA particle tracking code

    SciTech Connect

    Dell, G.F.

    1983-01-01

    The PATRICIA particle tracking program has been used to study chromatic effects in the Brookhaven CBA (Colliding Beam Accelerator). The short-term behavior of particles in the CBA has been followed for particle histories of 300 turns. Contributions from magnet multipoles characteristic of superconducting magnets and from closed orbit errors have been included in determining the dynamic aperture of the CBA for on- and off-momentum particles. The width of the third-integer stopband produced by the temperature dependence of magnetization-induced sextupoles in the CBA cable dipoles is evaluated for helium distribution systems having periodicities of one and six. The stopband width at a tune of 68/3 is naturally zero for the system with periodicity six and is approximately 10^-4 for the system with periodicity one. Results from theory are compared with results obtained with PATRICIA; the results agree within a factor of slightly more than two.

  17. Adaptations in a Community-Based Family Intervention: Replication of Two Coding Schemes.

    PubMed

    Cooper, Brittany Rhoades; Shrestha, Gitanjali; Hyman, Leah; Hill, Laura

    2016-02-01

    Although program adaptation is a reality in community-based implementations of evidence-based programs, much of the discussion about adaptation remains theoretical. The primary aim of this study was to replicate two coding systems to examine adaptations in large-scale, community-based disseminations of the Strengthening Families Program for Parents and Youth 10-14, a family-based substance use prevention program. Our second aim was to explore intersections between various dimensions of facilitator-reported adaptations from these two coding systems. Our results indicate that only a few types of adaptations and a few reasons accounted for a majority (over 70%) of all reported adaptations. We also found that most adaptations were logistical, reactive, and not aligned with the program's goals. In many ways, our findings replicate those of the original studies, suggesting the two coding systems are robust even when applied to self-reported data collected from community-based implementations. Our findings on the associations between adaptation dimensions can inform future studies assessing the relationship between adaptations and program outcomes. Studies of local adaptations, like the present one, should help researchers, program developers, and policymakers better understand the issues faced by implementers and guide efforts related to program development, transferability, and sustainability. PMID:26661413

  18. Adaptive Mesh Refinement Algorithms for Parallel Unstructured Finite Element Codes

    SciTech Connect

    Parsons, I D; Solberg, J M

    2006-02-03

    This project produced algorithms for and software implementations of adaptive mesh refinement (AMR) methods for solving practical solid and thermal mechanics problems on multiprocessor parallel computers using unstructured finite element meshes. The overall goal is to provide computational solutions that are accurate to some prescribed tolerance, and adaptivity is the correct path toward this goal. These new tools will enable analysts to conduct more reliable simulations at reduced cost, both in terms of analyst and computer time. Previous academic research in the field of adaptive mesh refinement has produced a voluminous literature focused on error estimators and demonstration problems; relatively little progress has been made on producing efficient implementations suitable for large-scale problem solving on state-of-the-art computer systems. Research issues that were considered include: effective error estimators for nonlinear structural mechanics; local meshing at irregular geometric boundaries; and constructing efficient software for parallel computing environments.

  19. Adaptation reduces variability of the neuronal population code

    NASA Astrophysics Data System (ADS)

    Farkhooi, Farzad; Muller, Eilif; Nawrot, Martin P.

    2011-05-01

    Sequences of events in noise-driven excitable systems with slow variables often show serial correlations among their intervals of events. Here, we employ a master equation for generalized non-renewal processes to calculate the interval and count statistics of superimposed processes governed by a slow adaptation variable. For an ensemble of neurons with spike-frequency adaptation, this results in the regularization of the population activity and an enhanced postsynaptic signal decoding. We confirm our theoretical results in a population of cortical neurons recorded in vivo.
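    The basic mechanism can be shown with a deterministic toy model (not the master-equation treatment of the paper): a leaky integrate-and-fire neuron with a slow, spike-triggered adaptation current fires rapidly at stimulus onset and then settles into longer inter-spike intervals.

```python
# Spike-frequency adaptation sketch: each spike increments a slow
# adaptation current 'a' that subtracts from the drive, so inter-spike
# intervals lengthen toward a steady state. Parameters are illustrative.

def lif_with_adaptation(i_ext=2.0, dt=0.001, t_max=1.0,
                        tau_m=0.02, tau_a=0.2, delta_a=1.0):
    v, a, t, spikes = 0.0, 0.0, 0.0, []
    while t < t_max:
        v += dt * (i_ext - a - v) / tau_m   # membrane with adaptation drive
        a += dt * (-a) / tau_a              # adaptation decays slowly
        if v >= 1.0:                        # threshold crossing
            spikes.append(t)
            v = 0.0                         # reset
            a += delta_a                    # spike-triggered increment
        t += dt
    return spikes
```

    Running this shows the onset transient (short intervals) relaxing to a slower steady firing rate, the single-neuron counterpart of the population-rate regularization described above.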

  20. ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES

    SciTech Connect

    D. T. Clark; M. J. Russell; R. E. Spears; S. R. Jensen

    2009-07-01

    With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated against present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction period but fall outside the standard parameters of present-day piping codes. Several approaches are available to the analyst for evaluating these non-standard components against modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components, with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach, available in Section III of the ASME Boiler and Pressure Vessel Code and the subject of this paper, involves calculating flexibility factors from finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, and those load magnitudes need to be consistent with the loads produced by the linear system analyses in which the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the non-standard component finite element model has been matched to the loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under those loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite
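    The load/flexibility iteration described above is a fixed-point problem: the system loads depend on the flexibility factor, and the factor (from the component finite element model) depends on the loads. A schematic sketch, with both analyses replaced by hypothetical one-line stand-ins:

```python
# Fixed-point sketch of the iteration described above. Both functions are
# hypothetical stand-ins: in practice system_load is a linear piping system
# analysis and flexibility_factor a component finite element analysis.

def system_load(k):
    return 100.0 / (1.0 + k)        # stand-in: load drawn by the joint

def flexibility_factor(load):
    return 1.0 + 0.01 * load        # stand-in: flexibility grows with load

def converge(k0=1.0, tol=1e-8, max_iter=100):
    k = k0
    for _ in range(max_iter):
        load = system_load(k)       # system analysis with current factor
        k_new = flexibility_factor(load)   # component analysis with that load
        if abs(k_new - k) < tol:
            return k_new, load      # self-consistent factor and load
        k = k_new
    raise RuntimeError("flexibility/load iteration did not converge")
```

    With these stand-ins the loop converges quickly because the composite map is a contraction; a real analysis pair may need under-relaxation to behave as well.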

  1. Adaptive face space coding in congenital prosopagnosia: typical figural aftereffects but abnormal identity aftereffects.

    PubMed

    Palermo, Romina; Rivolta, Davide; Wilson, C Ellie; Jeffery, Linda

    2011-12-01

    People with congenital prosopagnosia (CP) report difficulty recognising faces in everyday life and perform poorly on face recognition tests. Here, we investigate whether impaired adaptive face space coding might contribute to poor face recognition in CP. To pinpoint how adaptation may affect face processing, a group of CPs and matched controls completed two complementary face adaptation tasks: the figural aftereffect, which reflects adaptation to general distortions of shape, and the identity aftereffect, which directly taps the mechanisms involved in the discrimination of different face identities. CPs displayed a typical figural aftereffect, consistent with evidence that they are able to process some shape-based information from faces, e.g., cues to discriminate sex. CPs also demonstrated a significant identity aftereffect. However, unlike controls, CPs' impression of the identity of the neutral average face was not significantly shifted by adaptation, suggesting that adaptive coding of identity is abnormal in CP. In sum, CPs show reduced aftereffects but only when the task directly taps the use of face norms used to code individual identity. This finding of a reduced face identity aftereffect in individuals with severe face recognition problems is consistent with suggestions that adaptive coding may have a functional role in face recognition.

  3. Adaptive Zero-Coefficient Distribution Scan for Inter Block Mode Coding of H.264/AVC

    NASA Astrophysics Data System (ADS)

    Wang, Jing-Xin; Su, Alvin W. Y.

    Scanning quantized transform coefficients is an important tool for video coding. For example, the MPEG-4 video coder adopts three different scans to get better coding efficiency. This paper proposes an adaptive zero-coefficient distribution scan in inter block coding. The proposed method attempts to improve H.264/AVC zero coefficient coding by modifying the scan operation. Since the zero-coefficient distribution is changed by the proposed scan method, new VLC tables for syntax elements used in context-adaptive variable length coding (CAVLC) are also provided. The savings in bit-rate range from 2.2% to 5.1% in the high bit-rate cases, depending on different test sequences.
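    The idea of adapting the scan order to the zero-coefficient distribution can be sketched as follows; this is an illustrative simplification (ranking positions by observed nonzero frequency), not the paper's actual scan design or its CAVLC table construction.

```python
# Distribution-adapted scan sketch: order the 4x4 block positions by how
# often a nonzero quantized coefficient was observed there, so nonzeros
# cluster at the front of the scan and trailing zeros compress well.

def adapted_scan_order(blocks):
    """blocks: list of 4x4 integer coefficient blocks (lists of lists)."""
    counts = [[0] * 4 for _ in range(4)]
    for b in blocks:
        for r in range(4):
            for c in range(4):
                if b[r][c] != 0:
                    counts[r][c] += 1
    positions = [(r, c) for r in range(4) for c in range(4)]
    # most-often-nonzero positions are scanned first
    return sorted(positions, key=lambda rc: -counts[rc[0]][rc[1]])

def scan(block, order):
    return [block[r][c] for r, c in order]
```

    After training on typical inter-block statistics, scanning a block with the adapted order front-loads the nonzero coefficients, which is what makes the subsequent run-length/VLC stage cheaper.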

  4. A framework for evaluating wavelet based watermarking for scalable coded digital item adaptation attacks

    NASA Astrophysics Data System (ADS)

    Bhowmik, Deepayan; Abhayaratne, Charith

    2009-02-01

    A framework for evaluating wavelet-based watermarking schemes against scalable coded visual media content adaptation attacks is presented. The framework, Watermark Evaluation Bench for Content Adaptation Modes (WEBCAM), aims to facilitate controlled evaluation of wavelet-based watermarking schemes under MPEG-21 part-7 digital item adaptations (DIA). WEBCAM accommodates all major wavelet-based watermarking schemes in a single generalised framework by considering a global parameter space from which the optimum parameters for a specific algorithm may be chosen. WEBCAM models the traversal of media content along the various links, and the required content adaptations at the various nodes, of media supply chains. In this paper, content adaptation is emulated by JPEG2000 coded bit stream extraction at various spatial resolutions and quality levels. The proposed framework is beneficial not only as an evaluation tool but also as a design tool for new wavelet-based watermarking algorithms, by picking and mixing available tools and finding the optimum design parameters.

  5. Deficits in context-dependent adaptive coding of reward in schizophrenia.

    PubMed

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism's ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  7. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. I. ALGORITHM

    SciTech Connect

    Maron, Jason L.; McNally, Colin P.; Mac Low, Mordecai-Mark

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. Local, third-order, least-squares, polynomial interpolations (Moving Least Squares interpolations) are calculated from the field values of neighboring particles to obtain field values and spatial derivatives at the particle position. Field values and particle positions are advanced in time with a second-order predictor-corrector scheme. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is implemented to ensure the particles fill the computational volume, which gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. Particle addition and deletion is based on a local void and clump detection algorithm. Dynamic artificial viscosity fields provide stability to the integration. The resulting algorithm provides a robust solution for modeling flows that require Lagrangian or adaptive discretizations to resolve. This paper derives and documents the Phurbas algorithm as implemented in Phurbas version 1.1. A following paper presents the implementation and test problem results.
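    The moving-least-squares step can be sketched in one dimension (Phurbas uses third-order fits of vector fields in 3D; names and the 1D reduction here are illustrative): fit a polynomial to neighboring particle values in coordinates centered on the evaluation point, so the low-order coefficients give the field value and derivative there.

```python
# Moving-least-squares sketch in 1D: cubic least-squares fit to neighbor
# samples in local coordinates d = x - x0, so coeffs[0] is f(x0) and
# coeffs[1] is f'(x0).

import numpy as np

def mls_fit(xs, fs, x0, degree=3):
    """Least-squares polynomial fit around x0; returns (value, derivative)."""
    d = np.asarray(xs) - x0                        # local coordinates
    A = np.vander(d, degree + 1, increasing=True)  # columns [1, d, d^2, d^3]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(fs), rcond=None)
    return coeffs[0], coeffs[1]
```

    With at least degree + 1 well-spread neighbors the fit is exact for polynomial fields, which is the sense in which the interpolation is third-order accurate.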

  8. Effects of adaptation on neural coding by primary sensory interneurons in the cricket cercal system.

    PubMed

    Clague, H; Theunissen, F; Miller, J P

    1997-01-01

    Methods of stochastic systems analysis were applied to examine the effect of adaptation on frequency encoding by two functionally identical primary interneurons of the cricket cercal system. Stimulus reconstructions were obtained from a linear filtering transformation of spike trains elicited in response to bursts of broadband white noise air current stimuli (5-400 Hz). Each linear reconstruction was compared with the actual stimulus in the frequency domain to obtain a measure of waveform coding accuracy as a function of frequency. The term adaptation in this paper refers to the decrease in firing rate of a cell after the onset or increase in power of a white noise stimulus. The increase in firing rate after stimulus offset or decrease in stimulus power is assumed to be a complementary aspect of the same phenomenon. As the spike rate decreased during the course of adaptation, the total amount of information carried about the velocity waveform of the stimulus also decreased. The quality of coding of frequencies between 70 and 400 Hz decreased dramatically. The quality of coding of frequencies between 5 and 70 Hz decreased only slightly or even increased in some cases. The disproportionate loss of information about the higher frequencies could be attributed in part to the more rapid loss of spikes correlated with high-frequency stimulus components than of spikes correlated with low-frequency components. An increase in the responsiveness of a cell to frequencies > 70 Hz was correlated with a decrease in the ability of that cell to encode frequencies in the 5-70 Hz range. This nonlinear property could explain the improvement seen in some cases in the coding accuracy of frequencies between 5 and 70 Hz during the course of adaptation. Waveform coding properties also were characterized for fully adapted neurons at several stimulus intensities. The changes in coding observed through the course of adaptation were similar in nature to those found across stimulus powers.

  9. Image subband coding using context-based classification and adaptive quantization.

    PubMed

    Yoo, Y; Ortega, A; Yu, B

    1999-01-01

    Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
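
    The key point of backward adaptation is that each coefficient's class is derived from already-quantized (hence decoder-visible) neighbors, so no side information is needed. A toy sketch of this mechanism, with illustrative thresholds and step sizes rather than the paper's:

```python
import numpy as np

def backward_adaptive_quantize(coeffs, steps=(4.0, 8.0, 16.0)):
    """Backward-adaptive classification sketch: the class of each
    coefficient is computed from previously *reconstructed* values, so the
    decoder can recompute it with no side information."""
    recon = np.zeros(len(coeffs))
    classes = []
    for i, c in enumerate(coeffs):
        # Context = mean magnitude of up to two previous reconstructed values.
        ctx = np.mean(np.abs(recon[max(0, i - 2):i])) if i > 0 else 0.0
        k = 0 if ctx < 2.0 else (1 if ctx < 10.0 else 2)   # activity class
        recon[i] = np.round(c / steps[k]) * steps[k]       # uniform quantizer
        classes.append(k)
    return recon, classes
```

    A decoder holding only the quantized stream recomputes the same `ctx` and class sequence, which is exactly what makes the adaptation "backward".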

  10. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
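
    The per-block code selection can be illustrated with Golomb-Rice codes: map signed pixel differences to non-negative integers, then pick whichever of three candidate codes spends the fewest bits on the block. This is a hedged sketch of the general idea, not Rice and Plaunt's exact code set:

```python
def golomb_rice_len(n, k):
    """Bit length of non-negative n under a Golomb-Rice code with parameter k:
    unary quotient + stop bit + k remainder bits."""
    return (n >> k) + 1 + k

def map_signed(d):
    """Fold signed differences to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return 2 * d if d >= 0 else -2 * d - 1

def compress_block(samples, ks=(0, 2, 4)):
    """Choose the cheapest of three codes for one block of sample-to-sample
    differences; return the chosen parameter and the block's bit cost."""
    diffs = [map_signed(samples[i] - samples[i - 1]) for i in range(1, len(samples))]
    costs = {k: sum(golomb_rice_len(d, k) for d in diffs) for k in ks}
    best = min(costs, key=costs.get)
    return best, costs[best]
```

    Only the chosen parameter (2 bits for three-or-four options) need be sent per block, which is why no code tables have to be stored.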

  11. Application of adaptive subband coding for noisy bandlimited ECG signal processing

    NASA Astrophysics Data System (ADS)

    Aditya, Krishna; Chu, Chee-Hung H.; Szu, Harold H.

    1996-03-01

    An approach to impulsive noise suppression and background normalization of digitized bandlimited electrocardiogram signals is presented. This approach uses adaptive wavelet filters that incorporate the band-limited a priori information and the shape information of a signal to decompose the data. Empirical results show that the new algorithm has good performance in wideband impulsive noise suppression and background normalization for subsequent wave detection, when compared with subband coding using Daubechies' D4 wavelet without the bandlimited adaptive wavelet transform.

  12. Context-adaptive binary arithmetic coding with precise probability estimation and complexity scalability for high-efficiency video coding

    NASA Astrophysics Data System (ADS)

    Karwowski, Damian; Domański, Marek

    2016-01-01

    An improved context-based adaptive binary arithmetic coding (CABAC) scheme is presented. The improvement lies in a more accurate mechanism for estimating symbol probabilities in the standard CABAC algorithm, based on the context-tree weighting technique. Within a high-efficiency video coding (HEVC) encoder, the improved CABAC yields 0.7% to 4.5% bitrate savings over the original CABAC algorithm. The proposed algorithm only marginally affects encoder complexity, but decoder complexity increases by 32% to 38%. To reduce decoding complexity, a new tool that enables scaling of the decoder complexity is proposed for the improved CABAC. Experiments show that this tool gives a 5% to 7.5% reduction in decoding time while still maintaining high compression efficiency.
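
    Why probability estimation matters can be seen with a toy adaptive model: an ideal arithmetic coder spends -log2 p(bit) bits per symbol, so better estimates mean fewer bits. The Laplace-style counter below is a minimal stand-in for CABAC's state machine or the paper's context-tree weighting estimator:

```python
import math

class AdaptiveBitModel:
    """Laplace-style adaptive estimate of one binary symbol's probability;
    a minimal stand-in for CABAC's (or CTW's) probability estimation."""
    def __init__(self):
        self.counts = [1, 1]  # pseudo-counts for bits 0 and 1

    def p(self, bit):
        return self.counts[bit] / sum(self.counts)

    def update(self, bit):
        self.counts[bit] += 1

def ideal_code_length(bits, model):
    """Bits an ideal arithmetic coder would spend: sum of -log2 p(bit),
    updating the model after each symbol exactly as the decoder would."""
    total = 0.0
    for b in bits:
        total += -math.log2(model.p(b))
        model.update(b)
    return total
```

    On a strongly biased bit stream the adaptive model's cost quickly falls below 1 bit/symbol, which is the effect the improved estimator exploits more precisely.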

  13. Simultaneously sparse and low-rank hyperspectral image recovery from coded aperture compressive measurements via convex optimization

    NASA Astrophysics Data System (ADS)

    Gélvez, Tatiana C.; Rueda, Hoover F.; Arguello, Henry

    2016-05-01

    A hyperspectral image (HSI) can be described as a set of images with spatial information across different spectral bands. Compressive spectral imaging (CSI) techniques capture a 3-dimensional hyperspectral scene using 2-dimensional coded and multiplexed projections. Recovering the original scene from very few projections is valuable in applications such as remote sensing, video surveillance and biomedical imaging. Typically, HSI exhibit high correlations in both the spatial and spectral dimensions, and exploiting these correlations allows the original scene to be accurately recovered from compressed measurements. Traditional approaches exploit the sparsity of the scene when represented in a proper basis, solving an optimization problem that minimizes a joint l2-l1 norm to obtain the original scene. However, HSI have an important feature that has not been widely exploited: they are commonly low rank, since only a small number of spectral signatures are present in the image. Therefore, this paper proposes an approach to recover a simultaneously sparse and low-rank hyperspectral image by exploiting both features at the same time. The proposed approach solves an optimization problem that minimizes the l2-norm, penalized by the l1-norm to force the solution to be sparse, and by the nuclear norm to force the solution to be low rank. Theoretical analysis along with a set of simulations over different data sets shows that simultaneously exploiting low-rank and sparse structures enhances the performance of the recovery algorithm and the quality of the recovered image, with an average improvement of around 3 dB in peak signal-to-noise ratio (PSNR).
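
    Solvers for such l1 plus nuclear-norm penalized problems (e.g. proximal gradient or ADMM schemes) are typically built from two proximal operators: soft-thresholding for sparsity and singular-value shrinkage for low rank. A sketch of these building blocks, not the authors' full solver:

```python
import numpy as np

def soft_threshold(X, t):
    """Proximal operator of the l1 norm: shrinks entries toward zero (sparsity)."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svd_shrink(X, t):
    """Proximal operator of the nuclear norm: shrinks singular values (low rank)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```

    Alternating these two steps with a data-fidelity gradient step is one standard way to enforce both priors simultaneously.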

  14. A 2x2 multi-chip reconfigurable MOEMS mask: a stepping stone to large format microshutter arrays for coded aperture applications

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Brown, Alan G.; King, David O.; Smith, Gilbert W.; Gordon, Neil T.; Riches, Stephen; Rogers, Stanley

    2010-08-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. Recently, applications have emerged in the visible and infrared bands for low-cost lensless imaging systems, and system studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene - requiring a reconfigurable mask. Previously reported work focused on realising a 2 x 2 cm single-chip mask in the mid-IR based on polysilicon micro-opto-electro-mechanical systems (MOEMS) technology and its integration with ASIC drive electronics using conventional wire bonding. It employs interference effects to modulate incident light - achieved by tuning a large array of asymmetric Fabry-Perot optical cavities via an applied voltage - and uses a hysteretic row/column scheme for addressing. In this paper we report on the latest results in the mid-IR for the single-chip reconfigurable MOEMS mask, trials in scaling up to a mask based on a 2x2 multi-chip array, and progress towards realising a large format mask comprising 44 MOEMS chips. We also explore the potential of such large, transmissive IR spatial light modulator arrays for other applications and in the current and alternative architectures.
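
    The voltage-tuned modulation principle can be illustrated with the Airy transmittance of an idealized lossless Fabry-Perot cavity: shifting the gap moves the resonance, modulating throughput at a fixed wavelength. The real device is an asymmetric cavity, so the formula and constants here are purely illustrative:

```python
import math

def fp_transmittance(gap_um, wavelength_um, finesse_coeff=10.0):
    """Airy transmittance of an idealized lossless Fabry-Perot cavity.
    Electrostatically changing the gap shifts the resonance, modulating the
    transmitted intensity at a fixed wavelength (constants illustrative)."""
    phase = 2.0 * math.pi * gap_um / wavelength_um   # single-pass phase
    return 1.0 / (1.0 + finesse_coeff * math.sin(phase) ** 2)
```

    Driving each pixel between the on- and off-resonance gaps yields the open/closed states of the reconfigurable mask.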

  15. QoS-Aware Error Recovery in Wireless Body Sensor Networks Using Adaptive Network Coding

    PubMed Central

    Razzaque, Mohammad Abdur; Javadi, Saeideh S.; Coulibaly, Yahaya; Hira, Muta Tah

    2015-01-01

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users'/applications' and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, under dynamic network conditions and changing user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs from both the network and the user/application perspectives. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts. PMID:25551485
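
    One concrete form such adaptivity can take is choosing how many coded packets to send for a generation of k source packets, given the monitored loss rate and a reliability target. A sketch assuming idealized random linear network coding (any k received coded packets decode); the parameters are illustrative, not the paper's:

```python
from math import comb

def delivery_prob(n, k, loss):
    """P(at least k of n packets arrive) with i.i.d. packet loss."""
    p = 1.0 - loss
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def adapt_redundancy(k, loss, target=0.99, n_max=64):
    """Smallest number n of random linear combinations to send so that a
    generation of k packets decodes with probability >= target (assumes
    any k received coded packets suffice to decode)."""
    for n in range(k, n_max + 1):
        if delivery_prob(n, k, loss) >= target:
            return n
    return n_max
```

    Re-evaluating this with fresh loss estimates trades energy (extra transmissions) against reliability as conditions change.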

  16. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms.

  19. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
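
    A minimal sketch of forward gain adaptation: estimate the input vector's gain, normalize, search a gain-normalized codebook, and scale the chosen codevector back at the decoder. The RMS gain estimator here is one simple choice, not necessarily the optimized estimators the paper develops:

```python
import numpy as np

def gain_adaptive_vq(x, codebook):
    """Forward gain adaptation: estimate the gain, normalize the input,
    search a gain-normalized codebook; index and gain are transmitted."""
    gain = max(np.linalg.norm(x) / np.sqrt(len(x)), 1e-12)  # RMS gain estimate
    normalized = x / gain
    idx = int(np.argmin(np.sum((codebook - normalized) ** 2, axis=1)))
    return idx, gain, gain * codebook[idx]  # decoder output: gain * codevector
```

    Because the codebook only has to cover gain-normalized shapes, its effective dynamic range requirement shrinks, which is the source of the coding gain.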

  20. Context-Adaptive Arithmetic Coding Scheme for Lossless Bit Rate Reduction of MPEG Surround in USAC

    NASA Astrophysics Data System (ADS)

    Yoon, Sungyong; Pang, Hee-Suk; Sung, Koeng-Mo

    We propose a new coding scheme for lossless bit rate reduction of the MPEG Surround module in unified speech and audio coding (USAC). The proposed scheme is based on context-adaptive arithmetic coding for efficient bit stream composition of spatial parameters. Experiments show that it achieves significant lossless bit reductions of 9.93% to 12.14% for spatial parameters and 8.64% to 8.96% for the overall MPEG Surround bit streams compared to the original scheme. The proposed scheme, which is not currently included in USAC, can be used for the improved coding efficiency of MPEG Surround in USAC, where the saved bits can be utilized by the other modules in USAC.

  1. The development and application of the self-adaptive grid code, SAGE

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.

    1993-01-01

    The multidimensional self-adaptive grid code, SAGE, has proven to be a flexible and useful tool in the solution of complex flow problems. Both 2- and 3-D examples given in this report show the code to be reliable and to substantially improve flowfield solutions. Since the adaptive procedure is a marching scheme, the code is extremely fast and uses insignificant CPU time compared to the corresponding flow solver. The SAGE program is also machine and flow solver independent. Significant effort was made to simplify user interaction, though some parameters still need to be chosen with care. It is also difficult to tell when the adaption process has provided its best possible solution. This is particularly true if no experimental data are available or if there is a lack of theoretical understanding of the flow. Another difficulty occurs if local features are important but missing in the original grid; the adaption to this solution will not result in any improvement, and only grid refinement can result in an improved solution. These are complex issues that need to be explored within the context of each specific problem.

  2. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.

  3. GAMER: A GRAPHIC PROCESSING UNIT ACCELERATED ADAPTIVE-MESH-REFINEMENT CODE FOR ASTROPHYSICS

    SciTech Connect

    Schive, H.-Y.; Tsai, Y.-C.; Chiueh Tzihong

    2010-02-01

    We present the newly developed code, GPU-accelerated Adaptive-MEsh-Refinement code (GAMER), which adopts a novel approach in improving the performance of adaptive-mesh-refinement (AMR) astrophysical simulations by a large factor with the use of the graphic processing unit (GPU). The AMR implementation is based on a hierarchy of grid patches with an oct-tree data structure. We adopt a three-dimensional relaxing total variation diminishing scheme for the hydrodynamic solver and a multi-level relaxation scheme for the Poisson solver. Both solvers have been implemented in GPU, by which hundreds of patches can be advanced in parallel. The computational overhead associated with the data transfer between the CPU and GPU is carefully reduced by utilizing the capability of asynchronous memory copies in GPU, and the computing time of the ghost-zone values for each patch is diminished by overlapping it with the GPU computations. We demonstrate the accuracy of the code by performing several standard test problems in astrophysics. GAMER is a parallel code that can be run in a multi-GPU cluster system. We measure the performance of the code by performing purely baryonic cosmological simulations in different hardware implementations, in which detailed timing analyses provide comparison between the computations with and without GPU(s) acceleration. Maximum speed-up factors of 12.19 and 10.47 are demonstrated using one GPU with 4096³ effective resolution and 16 GPUs with 8192³ effective resolution, respectively.

  4. Development of three-dimensional hydrodynamical and MHD codes using Adaptive Mesh Refinement scheme with TVD

    NASA Astrophysics Data System (ADS)

    den, M.; Yamashita, K.; Ogawa, T.

    Three-dimensional (3D) hydrodynamic (HD) and magnetohydrodynamic (MHD) simulation codes using an adaptive mesh refinement (AMR) scheme are developed. This method places fine grids over areas of interest, such as shock waves, in order to obtain high resolution, and places uniform grids with lower resolution in other areas. The AMR scheme can thus provide a combination of high solution accuracy and computational robustness. We demonstrate numerical results for a simplified model of shock propagation, which strongly indicate that the AMR techniques have the ability to resolve disturbances in interplanetary space. We also present simulation results for the MHD code.
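
    The core of such an AMR scheme is deciding where the fine grids go. A one-dimensional sketch that flags cells near steep solution jumps (e.g. shocks) for refinement; the gradient criterion and threshold are illustrative:

```python
import numpy as np

def flag_refinement(u, threshold):
    """Mark cells where the local solution jump exceeds a threshold; in an
    AMR hierarchy these cells would be covered by finer grid patches."""
    jumps = np.abs(np.diff(u))          # jump across each cell interface
    flags = np.zeros(len(u), dtype=bool)
    flags[:-1] |= jumps > threshold     # cell left of a steep interface
    flags[1:] |= jumps > threshold      # cell right of a steep interface
    return flags
```

    Re-running this flagging as the solution evolves is what lets the fine patches follow a propagating shock.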

  5. A Dual-Sided Coded-Aperture Radiation Detection System (Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment)

    SciTech Connect

    Ziock, Klaus-Peter; Fabris, Lorenzo

    2010-01-01

    We report the development of a large-area, mobile, coded-aperture radiation imaging system for localizing compact radioactive sources in three dimensions while rejecting distributed background. The 3D Stand-Off Radiation Detection System (SORDS-3D) has been tested at speeds up to 95 km/h and has detected and located sources in the millicurie range at distances of over 100 m. Radiation data are imaged to a geospatially mapped world grid with a nominal 1.25- to 2.5-m pixel pitch at distances out to 120 m on either side of the platform. Source elevation is also extracted. Imaged radiation alarms are superimposed on a side-facing video log that can be played back for direct localization of sources in buildings in urban environments. The system utilizes a 37-element array of 5 x 5 x 50 cm³ cesium-iodide (sodium) detectors. Scintillation light is collected by a pair of photomultiplier tubes placed at either end of each detector, with the detectors achieving an energy resolution of 6.15% FWHM (662 keV) and a position resolution along their length of 5 cm FWHM. The imaging system generates a dual-sided two-dimensional image allowing users to efficiently survey a large area. Imaged radiation data and raw spectra are forwarded to the RadioNuclide Analysis Kit (RNAK), developed by our collaborators, for isotope ID. An intuitive real-time display aids users in performing searches. Detector calibration is dynamically maintained by monitoring the potassium-40 peak and digitally adjusting individual detector gains. We have recently realized improvements, both in isotope identification and in distinguishing compact sources from background, through the installation of optimal-filter reconstruction kernels.
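
    The dynamic calibration step can be sketched as a feedback loop that nudges each detector's digital gain until its potassium-40 photopeak (1460.8 keV) sits at the reference position. The update rule and smoothing constant below are illustrative, not the system's actual calibration code:

```python
def track_gain(gain, measured_peak, reference_peak=1460.8, alpha=0.1):
    """One update of a feedback loop that drifts a detector's digital gain
    toward placing its K-40 photopeak (1460.8 keV) at the reference
    position; the smoothing constant alpha is illustrative."""
    correction = reference_peak / measured_peak
    return gain * ((1.0 - alpha) + alpha * correction)
```

    Running this per detector keeps the 37-element array's energy scales aligned without taking the system offline.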

  6. Adaptive software-defined coded modulation for ultra-high-speed optical transport

    NASA Astrophysics Data System (ADS)

    Djordjevic, Ivan B.; Zhang, Yequn

    2013-10-01

    In optically-routed networks, different wavelength channels carrying traffic to different destinations can have quite different optical signal-to-noise ratios (OSNRs), and the signal is differently impacted by various channel impairments. Regardless of the data destination, an optical transport system (OTS) must provide the target bit-error rate (BER) performance; to do so, we adjust the forward error correction (FEC) strength. Depending on the information obtained from the monitoring channels, we select the code rate matching the OSNR range that the current channel OSNR falls into. To avoid frame synchronization issues, we keep the codeword length fixed, independent of the FEC code being employed. The common denominator is the employment of quasi-cyclic (QC) LDPC codes in FEC. For high-speed implementation, low-complexity LDPC decoding algorithms are needed, and some of them are described in this invited paper. Instead of conventional QAM-based modulation schemes, we employ signal constellations obtained by the optimum signal constellation design (OSCD) algorithm. To improve the spectral efficiency, we perform simultaneous rate adaptation and signal constellation size selection so that the product of the number of bits per symbol and the code rate is closest to the channel capacity. Further, we describe the advantages of using 4D signaling instead of polarization-division multiplexed (PDM) QAM, by using 4D MAP detection combined with LDPC coding in a turbo equalization fashion. Finally, to address the limited bandwidth of the information infrastructure, high energy consumption, and the heterogeneity of optical networks, we describe an adaptive energy-efficient hybrid coded-modulation scheme which, in addition to amplitude, phase, and polarization state, employs spatial modes as additional basis functions for multidimensional coded modulation.
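
    The simultaneous rate and constellation-size adaptation reduces to a small search: among the available (bits/symbol, code rate) pairs, pick the one whose product comes closest to the channel capacity without exceeding it. A sketch with an AWGN capacity estimate and illustrative mode tables (the actual system monitors OSNR and uses its own code and constellation sets):

```python
import math

def pick_mode(snr_db, code_rates=(0.8, 0.9), bits_per_symbol=(2, 3, 4, 6)):
    """Return (spectral efficiency, bits/symbol, code rate) with the largest
    product of bits/symbol and code rate not exceeding the estimated AWGN
    capacity at the monitored SNR; None if no mode fits. Mode tables are
    illustrative, not the paper's."""
    cap = math.log2(1.0 + 10.0 ** (snr_db / 10.0))  # capacity, bits/symbol
    best = None
    for m in bits_per_symbol:
        for r in code_rates:
            eff = m * r
            if eff <= cap and (best is None or eff > best[0]):
                best = (eff, m, r)
    return best
```

    Keeping the codeword length fixed across all such modes is what sidesteps re-synchronization when the mode changes.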

  7. Context adaptive lossless and near-lossless coding for digital angiographies.

    PubMed

    dos Santos, Rafael A P; Scharcanski, Jacob

    2007-01-01

    This paper presents a context adaptive coding method for image sequences in hemodynamics. The proposed method implements motion compensation through a two-stage context adaptive linear predictor. It is robust to the local intensity changes and the noise that often degrade these image sequences, and provides lossless and near-lossless quality. Our preliminary experiments with lossless compression of 12 bits/pixel studies indicate that, potentially, our approach can perform 3.8%, 2% and 1.6% better than JPEG-2000, JPEG-LS and the method proposed in [1], respectively. The performance tends to improve for near-lossless compression.
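
    Near-lossless predictive coding bounds the per-sample reconstruction error by quantizing prediction residuals into bins of width 2*delta + 1. The one-tap sketch below illustrates only this residual-quantization mechanism for integer-valued inputs; the paper's predictor is a two-stage context adaptive one:

```python
import numpy as np

def near_lossless_encode(x, delta=0):
    """Previous-sample predictor with residual quantization into bins of
    width 2*delta + 1; delta=0 degenerates to lossless coding. For
    integer-valued inputs the reconstruction error never exceeds delta."""
    recon = np.zeros(len(x))
    residuals = []
    prev = 0.0
    step = 2 * delta + 1
    for i, v in enumerate(x):
        r = v - prev                      # prediction residual
        q = int(np.round(r / step))       # quantized residual (transmitted)
        residuals.append(q)
        recon[i] = prev + q * step        # decoder-side reconstruction
        prev = recon[i]                   # predict from reconstructed value
    return residuals, recon
```

    Predicting from the reconstructed (not original) samples keeps encoder and decoder in lockstep, so the error bound holds end to end.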

  8. An edge-based solution-adaptive method applied to the AIRPLANE code

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Thomas, Scott D.; Cliff, Susan E.

    1995-01-01

    Computational methods to solve large-scale realistic problems in fluid flow can be made more efficient and cost effective by using them in conjunction with dynamic mesh adaption procedures that perform simultaneous coarsening and refinement to capture flow features of interest. This work couples the tetrahedral mesh adaption scheme, 3D_TAG, with the AIRPLANE code to solve complete aircraft configuration problems in transonic and supersonic flow regimes. Results indicate that the near-field sonic boom pressure signature of a cone-cylinder is improved, the oblique and normal shocks are better resolved on a transonic wing, and the bow shock ahead of an unstarted inlet is better defined.

  9. FLAG: A multi-dimensional adaptive free-Lagrange code for fully unstructured grids

    SciTech Connect

    Burton, D.E.; Miller, D.S.; Palmer, T.

    1995-07-01

    The authors describe FLAG, a 3D adaptive free-Lagrange method for unstructured grids. The grid elements are 3D polygons that move with the flow and are refined or reconnected as necessary to achieve uniform accuracy. The authors stress that they were able to construct a 3D hydro version of this code in 3 months, using an object-oriented FORTRAN approach.

  10. CRASH: A BLOCK-ADAPTIVE-MESH CODE FOR RADIATIVE SHOCK HYDRODYNAMICS-IMPLEMENTATION AND VERIFICATION

    SciTech Connect

    Van der Holst, B.; Toth, G.; Sokolov, I. V.; Myra, E. S.; Fryxell, B.; Drake, R. P.; Powell, K. G.; Holloway, J. P.; Stout, Q.; Adams, M. L.; Morel, J. E.; Karni, S.

    2011-06-01

    We describe the Center for Radiative Shock Hydrodynamics (CRASH) code, a block-adaptive-mesh code for multi-material radiation hydrodynamics. The implementation solves the radiation diffusion model with a gray or multi-group method and uses a flux-limited diffusion approximation to recover the free-streaming limit. Electrons and ions are allowed to have different temperatures and we include flux-limited electron heat conduction. The radiation hydrodynamic equations are solved in the Eulerian frame by means of a conservative finite-volume discretization in either one-, two-, or three-dimensional slab geometry or in two-dimensional cylindrical symmetry. An operator-split method is used to solve these equations in three substeps: (1) an explicit step of a shock-capturing hydrodynamic solver; (2) a linear advection of the radiation in frequency-logarithm space; and (3) an implicit solution of the stiff radiation diffusion, heat conduction, and energy exchange. We present a suite of verification test problems to demonstrate the accuracy and performance of the algorithms. The applications are for astrophysics and laboratory astrophysics. The CRASH code is an extension of the Block-Adaptive Tree Solarwind Roe Upwind Scheme (BATS-R-US) code with a new radiation transfer and heat conduction library and equation-of-state and multi-group opacity solvers. Both CRASH and BATS-R-US are part of the publicly available Space Weather Modeling Framework.

  11. An adaptive source-channel coding with feedback for progressive transmission of medical images.

    PubMed

    Lo, Jen-Lung; Sanei, Saeid; Nazarpour, Kianoush

    2009-01-01

    A novel adaptive source-channel coding with feedback for progressive transmission of medical images is proposed here. In the source coding part, the transmission starts from the region of interest (RoI). The parity length in the channel code varies with respect to both the proximity of the image subblock to the RoI and the channel noise, which is iteratively estimated in the receiver. The overall transmitted data can be controlled by the user (clinician). In the case of medical data transmission, it is vital to keep the distortion level under control as in most of the cases certain clinically important regions have to be transmitted without any visible error. The proposed system significantly reduces the transmission time and error. Moreover, the system is very user friendly since the selection of the RoI, its size, overall code rate, and a number of test features such as noise level can be set by the users in both ends. A MATLAB-based TCP/IP connection has been established to demonstrate the proposed interactive and adaptive progressive transmission system. The proposed system is simulated for both binary symmetric channel (BSC) and Rayleigh channel. The experimental results verify the effectiveness of the design.
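
    The parity-length rule can be sketched as a function that grows with proximity to the RoI and with the estimated channel noise; the specific weighting and constants below are illustrative, not the paper's formula:

```python
def parity_bits(dist_to_roi, est_ber, p_min=4, p_max=32):
    """Parity length grows toward the RoI and with the estimated channel
    noise; the weighting below is illustrative, not the paper's rule."""
    proximity = 1.0 / (1.0 + dist_to_roi)   # 1 at the RoI, decaying outward
    noise = min(est_ber / 0.1, 1.0)         # saturate at an assumed worst BER
    level = 0.5 * proximity + 0.5 * noise
    return int(p_min + level * (p_max - p_min))
```

    Blocks in or near the RoI thus get the strongest protection, while distant background blocks over a clean channel get the weakest, which is how the scheme trades total rate against clinically important fidelity.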

  13. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold driven maximum distortion criterion to select the specific coder used. The different coders are built using variable blocksized transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed. The bit allocation algorithm is more fully developed, and can be used to achieve more accurate bit assignments than the algorithms currently used in the literature. Some upper and lower bounds for the bit-allocation distortion-rate function are developed. An obtainable distortion-rate function is developed for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
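
    A minimal sketch of the threshold-driven variable-blocksize selection described above, using block variance as a stand-in for the dissertation's maximum-distortion criterion:

```python
# Sketch of threshold-driven variable-blocksize selection: a block whose
# local "coding difficulty" (variance, used here as a stand-in for the
# dissertation's maximum-distortion criterion) exceeds a threshold is
# split into four sub-blocks, so difficult regions get small transforms.

def variance(img, r, c, size):
    vals = [img[i][j] for i in range(r, r + size) for j in range(c, c + size)]
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def partition(img, r, c, size, threshold, min_size=2):
    """Return a list of (row, col, size) leaf blocks."""
    if size <= min_size or variance(img, r, c, size) <= threshold:
        return [(r, c, size)]
    half = size // 2
    blocks = []
    for dr in (0, half):
        for dc in (0, half):
            blocks += partition(img, r + dr, c + dc, half, threshold, min_size)
    return blocks

# A flat 8x8 image with one busy quadrant: only that quadrant is split.
img = [[0] * 8 for _ in range(8)]
for i in range(4):
    for j in range(4):
        img[i][j] = (i + j) % 2 * 100  # checkerboard = high variance
blocks = partition(img, 0, 0, 8, threshold=10.0)
```

    The three smooth quadrants stay as 4x4 leaves while the busy quadrant is recursively split down to 2x2 blocks, giving 7 leaves in total.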

  14. ALEGRA -- A massively parallel h-adaptive code for solid dynamics

    SciTech Connect

    Summers, R.M.; Wong, M.K.; Boucheron, E.A.; Weatherby, J.R.

    1997-12-31

    ALEGRA is a multi-material, arbitrary-Lagrangian-Eulerian (ALE) code for solid dynamics designed to run on massively parallel (MP) computers. It combines the features of modern Eulerian shock codes, such as CTH, with modern Lagrangian structural analysis codes using an unstructured grid. ALEGRA is being developed for use on the teraflop supercomputers to conduct advanced three-dimensional (3D) simulations of shock phenomena important to a variety of systems. ALEGRA was designed with the Single Program Multiple Data (SPMD) paradigm, in which the mesh is decomposed into sub-meshes so that each processor gets a single sub-mesh with approximately the same number of elements. Using this approach the authors have been able to produce a single code that can scale from one processor to thousands of processors. A current major effort is to develop efficient, high precision simulation capabilities for ALEGRA, without the computational cost of using a global highly resolved mesh, through flexible, robust h-adaptivity of finite elements. H-adaptivity is the dynamic refinement of the mesh by subdividing elements, thus changing the characteristic element size and reducing numerical error. The authors are working on several major technical challenges that must be met to make effective use of HAMMER on MP computers.
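
    As a toy illustration of the h-adaptivity idea (in 1D, not ALEGRA's actual finite-element machinery), elements whose error indicator exceeds a tolerance are bisected, shrinking the characteristic element size only where needed:

```python
# Toy illustration of h-adaptivity (not ALEGRA's implementation):
# 1D elements whose error indicator exceeds a tolerance are bisected,
# reducing the characteristic element size only where it is needed.

def refine(elements, error_indicator, tol, max_passes=10):
    """elements: list of (left, right) intervals; returns refined list."""
    for _ in range(max_passes):
        new, changed = [], False
        for (a, b) in elements:
            if error_indicator(a, b) > tol and (b - a) > 1e-12:
                mid = 0.5 * (a + b)
                new += [(a, mid), (mid, b)]  # bisect the element
                changed = True
            else:
                new.append((a, b))
        elements = new
        if not changed:
            break
    return elements

# Hypothetical error indicator concentrated near x = 0 (e.g. a shock at
# the origin): indicator ~ element size / (1 + distance to origin).
def indicator(a, b):
    mid = 0.5 * (a + b)
    return (b - a) / (1.0 + abs(mid))

mesh = refine([(0.0, 1.0), (1.0, 2.0)], indicator, tol=0.1)
sizes = [b - a for a, b in mesh]  # smallest elements cluster near x = 0
```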

  15. Less can be more: RNA-adapters may enhance coding capacity of replicators.

    PubMed

    de Boer, Folkert K; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary 'functional' structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This 'RNA-adapter' can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in order to

  16. Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ R.; Gorjian, Varoujan; Rebull, Luisa M.; Masci, Frank J.; Fowler, John W.; Helou, George; Kulkarni, Shrinivas R.; Law, Nicholas M.

    2012-07-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It is a graphical user interface (GUI) designed to allow the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. The finely tuned layout of the GUI, along with judicious use of color-coding and alerting, is intended to give maximal user utility and convenience. Simply mouse-clicking on a source in the displayed image will instantly draw a circular or elliptical aperture and sky annulus around the source and will compute the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs with just the push of a button, including image histogram, x and y aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has many functions for customizing the calculations, including outlier rejection, pixel "picking" and "zapping," and a selection of source and sky models. The radial-profile-interpolation source model
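
    The core measurement that tools like APT automate, summing counts inside a circular aperture and subtracting a local sky level estimated from an annulus, can be sketched as follows; the median sky estimator here is just one simple choice among the selectable sky models such a tool offers.

```python
# Sketch of basic circular-aperture photometry: sum pixel counts inside
# an aperture around the source and subtract the local sky level
# estimated from a surrounding annulus (median estimator, one simple choice).

from statistics import median

def aperture_photometry(img, cx, cy, r_ap, r_in, r_out):
    """Return sky-subtracted source intensity for a pixel grid `img`."""
    src, n_src, sky = 0.0, 0, []
    for y, row in enumerate(img):
        for x, val in enumerate(row):
            d2 = (x - cx) ** 2 + (y - cy) ** 2
            if d2 <= r_ap ** 2:                  # inside the source aperture
                src += val
                n_src += 1
            elif r_in ** 2 <= d2 <= r_out ** 2:  # inside the sky annulus
                sky.append(val)
    sky_level = median(sky) if sky else 0.0
    return src - n_src * sky_level               # subtract sky under aperture

# Synthetic frame: flat sky of 10 counts plus a 100-count point source.
img = [[10.0] * 21 for _ in range(21)]
img[10][10] += 100.0
flux = aperture_photometry(img, cx=10, cy=10, r_ap=3, r_in=5, r_out=8)
```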

  17. THE PLUTO CODE FOR ADAPTIVE MESH COMPUTATIONS IN ASTROPHYSICAL FLUID DYNAMICS

    SciTech Connect

    Mignone, A.; Tzeferacos, P.; Zanni, C.; Bodo, G.; Van Straalen, B.; Colella, P.

    2012-01-01

    We present a description of the adaptive mesh refinement (AMR) implementation of the PLUTO code for solving the equations of classical and special relativistic magnetohydrodynamics (MHD and RMHD). The current release exploits, in addition to the static grid version of the code, the distributed infrastructure of the CHOMBO library for multidimensional parallel computations over block-structured, adaptively refined grids. We employ a conservative finite-volume approach where primary flow quantities are discretized at the cell center in a dimensionally unsplit fashion using the Corner Transport Upwind method. Time stepping relies on a characteristic tracing step where piecewise parabolic method, weighted essentially non-oscillatory, or slope-limited linear interpolation schemes can be handily adopted. A characteristic decomposition-free version of the scheme is also illustrated. The solenoidal condition of the magnetic field is enforced by augmenting the equations with a generalized Lagrange multiplier providing propagation and damping of divergence errors through a mixed hyperbolic/parabolic explicit cleaning step. Among the novel features, we describe an extension of the scheme to include non-ideal dissipative processes, such as viscosity, resistivity, and anisotropic thermal conduction without operator splitting. Finally, we illustrate an efficient treatment of point-local, potentially stiff source terms over hierarchical nested grids by taking advantage of the adaptivity in time. Several multidimensional benchmarks and applications to problems of astrophysical relevance assess the potentiality of the AMR version of PLUTO in resolving flow features separated by large spatial and temporal disparities.

  18. Less Can Be More: RNA-Adapters May Enhance Coding Capacity of Replicators

    PubMed Central

    de Boer, Folkert K.; Hogeweg, Paulien

    2012-01-01

    It is still not clear how prebiotic replicators evolved towards the complexity found in present day organisms. Within the most realistic scenario for prebiotic evolution, known as the RNA world hypothesis, such complexity has arisen from replicators consisting solely of RNA. Within contemporary life, remarkably many RNAs are involved in modifying other RNAs. In hindsight, such RNA-RNA modification might have helped in alleviating the limits of complexity posed by the information threshold for RNA-only replicators. Here we study the possible role of such self-modification in early evolution, by modeling the evolution of protocells as evolving replicators, which have the opportunity to incorporate these mechanisms as a molecular tool. Evolution is studied towards a set of 25 arbitrary ‘functional’ structures, while avoiding all other (misfolded) structures, which are considered to be toxic and increase the death-rate of a protocell. The modeled protocells contain a genotype of different RNA-sequences while their phenotype is the ensemble of secondary structures they can potentially produce from these RNA-sequences. One of the secondary structures explicitly codes for a simple sequence-modification tool. This ‘RNA-adapter’ can block certain positions on other RNA-sequences through antisense base-pairing. The altered sequence can produce an alternative secondary structure, which may or may not be functional. We show that the modifying potential of interacting RNA-sequences enables these protocells to evolve high fitness under high mutation rates. Moreover, our model shows that because of toxicity of misfolded molecules, redundant coding impedes the evolution of self-modification machinery, in effect restraining the evolvability of coding structures. Hence, high mutation rates can actually promote the evolution of complex coding structures by reducing redundant coding. Protocells can successfully use RNA-adapters to modify their genotype-phenotype mapping in

  19. Motion-adaptive model-assisted compatible coding with spatiotemporal scalability

    NASA Astrophysics Data System (ADS)

    Lee, JaeBeom; Eleftheriadis, Alexandros

    1997-01-01

    We introduce the concept of motion-adaptive spatio-temporal model-assisted compatible (MA-STMAC) coding, a technique to selectively encode areas of different importance to the human eye, in terms of space and time, in moving images, taking object motion into consideration. The previous STMAC scheme was proposed based on the fact that human 'eye contact' and 'lip synchronization' are very important in person-to-person communication. Several areas, including the eyes and lips, need different types of quality, since different areas have different perceptual significance to human observers. The approach provides a better rate-distortion tradeoff than conventional image coding techniques based on MPEG-1, MPEG-2, H.261, as well as H.263. STMAC coding is applied on top of an encoder, taking full advantage of its core design. Model motion tracking in our previous STMAC approach was not automatic. The proposed MA-STMAC coding considers the motion of the human face within the STMAC concept using automatic area detection. Experimental results are given using ITU-T H.263, addressing very-low-bit-rate compression.

  20. Adaptation of TRIPND Field Line Tracing Code to a Shaped, Poloidal Divertor Geometry

    NASA Astrophysics Data System (ADS)

    Monat, P.; Moyer, R. A.; Evans, T. E.

    2001-10-01

    The magnetic field line tracing code TRIPND(T.E. Evans, Proc. 18th Conf. on Control. Fusion and Plasma Phys., Berlin, Germany, Vol. 15C, Part II (European Physical Society, 1991) p. 65.) has been modified to use the axisymmetric equilibrium magnetic fields from an EFIT reconstruction in place of circular equilibria with multi-filament current profile expansions. This adaptation provides realistic plasma current profiles in shaped geometries. A major advantage of this modification is that it allows investigation of magnetic field line trajectories in any device for which an EFIT reconstruction is available. The TRIPND code has been used to study the structure of the magnetic field line topology in circular, limiter tokamaks, including Tore Supra and TFTR and has been benchmarked against the GOURDON code used in Europe for magnetic field line tracing. The new version of the code, called TRIP3D, is used to investigate the sensitivity of various shaped equilibria to non-axisymmetric perturbations such as a shifted F coil or error field correction coils.

  1. Hierarchical prediction and context adaptive coding for lossless color image compression.

    PubMed

    Kim, Seyun; Cho, Nam Ik

    2014-01-01

    This paper presents a new lossless color image compression algorithm based on hierarchical prediction and context-adaptive arithmetic coding. For the lossless compression of an RGB image, the image is first decorrelated by a reversible color transform, and the Y component is then encoded by a conventional lossless grayscale image compression method. For encoding the chrominance images, we develop a hierarchical scheme that enables the use of upper, left, and lower pixels for pixel prediction, whereas conventional raster-scan prediction methods use only upper and left pixels. An appropriate context model for the prediction error is also defined, and arithmetic coding is applied to the error signal corresponding to each context. For several sets of images, it is shown that the proposed method further reduces the bit rates compared with JPEG2000 and JPEG-XR.
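
    The benefit of the hierarchical scan order, being able to predict from the lower neighbour as well as the upper and left ones, can be sketched as follows; the simple averaging predictor is illustrative only, not the paper's actual predictor.

```python
# Sketch of the hierarchical idea: if the even rows of a chrominance
# plane are coded first, each odd-row pixel can be predicted from its
# upper AND lower neighbours (plus the left pixel), which a raster scan
# cannot do. The simple averaging predictor here is illustrative only.

def predict_odd_rows(img):
    """Return prediction residuals for the odd rows of a 2-D list `img`."""
    residuals = []
    for y in range(1, len(img) - 1, 2):          # odd rows, interior only
        row_res = []
        for x, val in enumerate(img[y]):
            neighbours = [img[y - 1][x], img[y + 1][x]]  # upper + lower
            if x > 0:
                neighbours.append(img[y][x - 1])          # left
            pred = round(sum(neighbours) / len(neighbours))
            row_res.append(val - pred)
        residuals.append(row_res)
    return residuals

# A smooth vertical gradient is predicted perfectly, so the residuals
# (what actually gets entropy-coded) are all zero.
img = [[10 * y] * 4 for y in range(5)]
res = predict_odd_rows(img)
```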

  2. Radiographic image sequence coding using adaptive finite-state vector quantization

    NASA Astrophysics Data System (ADS)

    Joo, Chang-Hee; Choi, Jong S.

    1990-11-01

    Vector quantization (VQ) is an effective spatial-domain image coding technique at under 1.0 bits per pixel. To achieve this quality at lower rates, it is necessary to exploit spatial redundancy over a larger region of pixels than is possible with memoryless VQ. A finite-state vector quantizer can achieve the same performance as memoryless VQ at lower rates. This paper describes an adaptive finite-state vector quantization scheme for radiographic image sequence coding. A simulation experiment has been carried out with 4×4 blocks of pixels from a sequence of cardiac angiograms consisting of 40 frames of size 256×256 pixels each. At 0.45 bpp, the resulting adaptive FSVQ encoder achieves performance comparable to earlier memoryless VQs at 0.8 bpp.

  3. PHURBAS: AN ADAPTIVE, LAGRANGIAN, MESHLESS, MAGNETOHYDRODYNAMICS CODE. II. IMPLEMENTATION AND TESTS

    SciTech Connect

    McNally, Colin P.; Mac Low, Mordecai-Mark; Maron, Jason L. E-mail: jmaron@amnh.org

    2012-05-01

    We present an algorithm for simulating the equations of ideal magnetohydrodynamics and other systems of differential equations on an unstructured set of points represented by sample particles. The particles move with the fluid, so the time step is not limited by the Eulerian Courant-Friedrichs-Lewy condition. Full spatial adaptivity is required to ensure the particles fill the computational volume and gives the algorithm substantial flexibility and power. A target resolution is specified for each point in space, with particles being added and deleted as needed to meet this target. We have parallelized the code by adapting the framework provided by GADGET-2. A set of standard test problems, including 10^-6 amplitude linear magnetohydrodynamic waves, magnetized shock tubes, and Kelvin-Helmholtz instabilities, is presented. Finally, we demonstrate good agreement with analytic predictions of linear growth rates for magnetorotational instability in a cylindrical geometry. This paper documents the Phurbas algorithm as implemented in Phurbas version 1.1.

  4. Channel Error Propagation In Predictor Adaptive Differential Pulse Code Modulation (DPCM) Coders

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; Rao, K. R.

    1980-11-01

    New adaptive differential pulse code modulation (ADPCM) coders with adaptive prediction are proposed and compared with existing non-adaptive DPCM coders for processing composite National Television System Committee (NTSC) television signals. Comparisons are based on quantitative criteria as well as subjective evaluation of the processed still frames. The performance of the proposed predictors is shown to be independent of well-designed quantizers and better than that of existing predictors in such critical regions of the pictures as edges and contours. The test data consist of four color images with varying levels of activity, color, and detail. The adaptive predictors, however, are sensitive to channel errors. Propagation of transmission noise depends on the type of prediction and on the location of the noise, i.e., whether it falls in a uniform region or in an active region. The transmission error propagation for different predictors is investigated. By introducing leak in the predictor output and/or predictor function, it is shown that this propagation can be significantly reduced. The combination predictors not only attenuate and/or terminate the channel error propagation but also improve predictor performance based on quantitative evaluations such as essential peak value and mean square error between the original and reconstructed images.

  5. Optimal joint power-rate adaptation for error resilient video coding

    NASA Astrophysics Data System (ADS)

    Lin, Yuan; Gürses, Eren; Kim, Anna N.; Perkis, Andrew

    2008-01-01

    In recent years, digital imaging devices have become an integral part of our daily lives due to advancements in imaging, storage, and wireless communication technologies. Power-Rate-Distortion (P-R-D) efficiency is the key factor common to all resource-constrained portable devices. In addition, especially in real-time wireless multimedia applications, channel-adaptive and error-resilient source coding techniques should be considered in conjunction with P-R-D efficiency, since most of the time Automatic Repeat-reQuest (ARQ) and Forward Error Correction (FEC) are either not feasible or costly in terms of bandwidth efficiency and delay. In this work, we focus on scenarios of real-time video communication for resource-constrained devices over bandwidth-limited and lossy channels, and propose an analytic Power-channel Error-Rate-Distortion (P-E-R-D) model. In particular, the probabilities of macroblock coding modes are intelligently controlled through an optimization process according to their distinct rate-distortion-complexity performance for a given channel error rate. The framework provides theoretical guidelines for the joint analysis of error-resilient source coding and resource allocation. Experimental results show that our optimal framework provides consistent rate-distortion performance gains under different power constraints.

  6. Automatic network-adaptive ultra-low-bit-rate video coding

    NASA Astrophysics Data System (ADS)

    Chien, Wei-Jung; Lam, Tuyet-Trang; Abousleman, Glen P.; Karam, Lina J.

    2006-05-01

    This paper presents a software-only, real-time video coder/decoder (codec) for use with low-bandwidth channels where the bandwidth is unknown or varies with time. The codec incorporates a modified JPEG2000 core and interframe predictive coding, and can operate with network bandwidths of less than 1 kbits/second. The encoder and decoder establish two virtual connections over a single IP-based communications link. The first connection is UDP/IP guaranteed throughput, which is used to transmit the compressed video stream in real time, while the second is TCP/IP guaranteed delivery, which is used for two-way control and compression parameter updating. The TCP/IP link serves as a virtual feedback channel and enables the decoder to instruct the encoder to throttle back the transmission bit rate in response to the measured packet loss ratio. It also enables either side to initiate on-the-fly parameter updates such as bit rate, frame rate, frame size, and correlation parameter, among others. The codec also incorporates frame-rate throttling whereby the number of frames decoded is adjusted based upon the available processing resources. Thus, the proposed codec is capable of automatically adjusting the transmission bit rate and decoding frame rate to adapt to any network scenario. Video coding results for a variety of network bandwidths and configurations are presented to illustrate the vast capabilities of the proposed video coding system.

  7. Evaluation of in-network adaptation of scalable high efficiency video coding (SHVC) in mobile environments

    NASA Astrophysics Data System (ADS)

    Nightingale, James; Wang, Qi; Grecos, Christos; Goma, Sergio

    2014-02-01

    High Efficiency Video Coding (HEVC), the latest video compression standard (also known as H.265), can deliver video streams of comparable quality to the current H.264 Advanced Video Coding (H.264/AVC) standard with a 50% reduction in bandwidth. Research into SHVC, the scalable extension of the HEVC standard, is still in its infancy. One important area for investigation is whether, given the greater compression ratio of HEVC (and SHVC), the loss of packets containing video content will have a greater impact on the quality of delivered video than is the case with H.264/AVC or its scalable extension H.264/SVC. In this work we empirically evaluate the layer-based, in-network adaptation of video streams encoded using SHVC in situations where dynamically changing bandwidths and datagram loss ratios require the real-time adaptation of video streams. Through extensive experimentation, we establish a comprehensive set of benchmarks for SHVC-based high-definition video streaming in loss-prone network environments such as those commonly found in mobile networks. Among other results, we highlight that packet losses of only 1% can lead to a substantial reduction in PSNR of over 3 dB and to error propagation in over 130 pictures following the one in which the loss occurred. This work is among the earliest studies in this cutting-edge area to report benchmark evaluation results for the effects of datagram loss on SHVC picture quality and to offer empirical and analytical insights into SHVC adaptation to lossy, mobile networking conditions.

  8. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-01

    In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with an overhead from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding combined with higher-order modulations has been demonstrated, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate. PMID:27607718
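
    The rate arithmetic behind shortening can be sketched as follows; the mother-code size is a hypothetical choice, picked only so the resulting overheads land near the 25%-42.9% range quoted above.

```python
# Sketch of rate adaptation via code shortening: fix s information bits
# of an (n, k) mother code to zero and do not transmit them, giving an
# (n - s, k - s) code of lower rate / higher overhead. The mother-code
# parameters below are an illustrative assumption.

def shortened_code(n, k, s):
    """Return (rate, overhead) of an (n, k) code shortened by s bits."""
    n_s, k_s = n - s, k - s
    rate = k_s / n_s
    overhead = (n_s - k_s) / k_s          # parity bits per information bit
    return rate, overhead

n, k = 20000, 16000                        # hypothetical mother code
rate0, oh0 = shortened_code(n, k, 0)       # 25% overhead, rate 0.8
rate1, oh1 = shortened_code(n, k, 6672)    # ~42.9% overhead, lower rate
```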

  10. Spatiotemporal Spike Coding of Behavioral Adaptation in the Dorsal Anterior Cingulate Cortex.

    PubMed

    Logiaco, Laureline; Quilodran, René; Procyk, Emmanuel; Arleo, Angelo

    2015-08-01

    The frontal cortex controls behavioral adaptation in environments governed by complex rules. Many studies have established the relevance of firing rate modulation after informative events signaling whether and how to update the behavioral policy. However, whether the spatiotemporal features of these neuronal activities contribute to encoding imminent behavioral updates remains unclear. We investigated this issue in the dorsal anterior cingulate cortex (dACC) of monkeys while they adapted their behavior based on their memory of feedback from past choices. We analyzed spike trains of both single units and pairs of simultaneously recorded neurons using an algorithm that emulates different biologically plausible decoding circuits. This method permits the assessment of the performance of both spike-count and spike-timing sensitive decoders. In response to the feedback, single neurons emitted stereotypical spike trains whose temporal structure identified informative events with higher accuracy than mere spike count. The optimal decoding time scale was in the range of 70-200 ms, which is significantly shorter than the memory time scale required by the behavioral task. Importantly, the temporal spiking patterns of single units were predictive of the monkeys' behavioral response time. Furthermore, some features of these spiking patterns often varied between jointly recorded neurons. Taken together, our results suggest that dACC drives behavioral adaptation through complex spatiotemporal spike coding. They also indicate that downstream networks, which decode dACC feedback signals, are unlikely to act as mere neural integrators. PMID:26266537

  11. Improving Inpatient Surveys: Web-Based Computer Adaptive Testing Accessed via Mobile Phone QR Codes

    PubMed Central

    2016-01-01

    Background The National Health Service (NHS) 70-item inpatient questionnaire surveys inpatients on their perceptions of their hospitalization experience. However, it imposes more burden on the patient than other similar surveys. The literature shows that computerized adaptive testing (CAT) based on item response theory can help shorten the item length of a questionnaire without compromising its precision. Objective Our aim was to investigate whether CAT can be (1) efficient with item reduction and (2) used with quick response (QR) codes scanned by mobile phones. Methods After downloading the 2008 inpatient survey data from the Picker Institute Europe website and analyzing the difficulties of this 70-item questionnaire, we used an author-made Excel program using the Rasch partial credit model to simulate 1000 patients’ true scores following a standard normal distribution. The CAT was compared to two other scenarios of answering all items (AAI) and the randomized selection method (RSM), as we investigated item length (efficiency) and measurement accuracy. The author-made Web-based CAT program for gathering patient feedback was effectively accessed from mobile phones by scanning the QR code. Results We found that the CAT can be more efficient for patients answering questions (ie, fewer items to respond to) than either AAI or RSM without compromising its measurement accuracy. A Web-based CAT inpatient survey accessed by scanning a QR code on a mobile phone was viable for gathering inpatient satisfaction responses. Conclusions With advances in technology, patients can now be offered alternatives for providing feedback about hospitalization satisfaction. This Web-based CAT is a possible option in health care settings for reducing the number of survey items, as well as offering an innovative QR code access. PMID:26935793
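
    The item-reduction idea behind CAT can be sketched as follows for a dichotomous Rasch model; the item difficulties and the plain maximum-information selection rule are illustrative assumptions, not the authors' Excel program.

```python
# Sketch of adaptive item selection in CAT: under a Rasch model, give
# the item whose Fisher information is highest at the respondent's
# current ability estimate, so fewer items reach a target precision.
# The item difficulties below are made up for illustration.

import math

def p_endorse(theta, b):
    """Rasch probability of endorsing an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def next_item(theta, difficulties, used):
    """Index of the unused item with maximal Fisher information p*(1-p)."""
    best, best_info = None, -1.0
    for i, b in enumerate(difficulties):
        if i in used:
            continue
        p = p_endorse(theta, b)
        info = p * (1.0 - p)
        if info > best_info:
            best, best_info = i, info
    return best

difficulties = [-2.0, -1.0, 0.0, 1.0, 2.0]
# For a respondent currently estimated at theta = 0.9, the most
# informative remaining item is the one with difficulty closest to 0.9.
chosen = next_item(0.9, difficulties, used=set())
```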

  12. Adaptive coded spreading OFDM signal for dynamic-λ optical access network

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Zhang, Lijia; Xin, Xiangjun

    2015-12-01

    This paper proposes and experimentally demonstrates a novel adaptive coded spreading (ACS) orthogonal frequency division multiplexing (OFDM) signal for a dynamic distributed optical ring-based access network. The wavelength can be assigned to different remote nodes (RNs) according to the traffic demand of the optical network unit (ONU). The ACS can provide dynamic spreading gain to different signals according to the split ratio or transmission length, which offers a flexible power budget for the network. A 10×13.12 Gb/s OFDM access with ACS is successfully demonstrated over two RNs and 120 km transmission in the experiment. The demonstrated method may be viewed as a promising one for future optical metro-access networks.

  13. Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
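
    The PCM-to-ADM conversion described above can be sketched as follows; the Jayant-style doubling/halving step-size rule is a common textbook adaptation, not necessarily the converter structure proposed in the paper.

```python
# Sketch of converting PCM samples to an adaptive-delta-modulation bit
# stream: the running estimate chases each PCM sample one step at a
# time, and the step size adapts (grow on a run of equal bits, shrink
# on a reversal). The doubling/halving rule is a textbook choice.

def pcm_to_adm(samples, step=1.0, step_min=0.5, step_max=32.0):
    bits, estimate, prev_bit = [], 0.0, None
    for s in samples:
        bit = 1 if s >= estimate else 0
        # Jayant-style adaptation: the same bit twice suggests slope
        # overload, so grow the step; a reversal suggests granular
        # noise, so shrink it.
        if prev_bit is not None:
            step = step * 2.0 if bit == prev_bit else step / 2.0
            step = min(max(step, step_min), step_max)
        estimate += step if bit else -step
        bits.append(bit)
        prev_bit = bit
    return bits, estimate

# A rising ramp produces a run of 1s while the estimate catches up.
bits, est = pcm_to_adm([5.0, 10.0, 15.0, 20.0])
```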

  14. The NASPE/BPEG generic pacemaker code for antibradyarrhythmia and adaptive-rate pacing and antitachyarrhythmia devices.

    PubMed

    Bernstein, A D; Camm, A J; Fletcher, R D; Gold, R D; Rickards, A F; Smyth, N P; Spielman, S R; Sutton, R

    1987-07-01

    A new generic pacemaker code, derived from and compatible with the Revised ICHD Code, was proposed jointly by the North American Society of Pacing and Electrophysiology (NASPE) Mode Code Committee and the British Pacing and Electrophysiology Group (BPEG), and has been adopted by the NASPE Board of Trustees. It is abbreviated as the NBG (for "NASPE/BPEG Generic") Code, and was developed to permit extension of the generic-code concept to pacemakers whose escape rate is continuously controlled by monitoring some physiologic variable, rather than determined by fixed escape intervals measured from stimuli or sensed depolarizations, and to antitachyarrhythmia devices including cardioverters and defibrillators. The NASPE/BPEG Code incorporates an "R" in the fourth position to signify rate modulation (adaptive-rate pacing), and one of four letters in the fifth position to indicate the presence of antitachyarrhythmia-pacing capability or of cardioversion or defibrillation functions. PMID:2441363

  15. Blind Adaptive Decorrelating RAKE (DRAKE) Downlink Receiver for Space-Time Block Coded Multipath CDMA

    NASA Astrophysics Data System (ADS)

    Jayaweera, Sudharman K.; Poor, H. Vincent

    2003-12-01

    A downlink receiver is proposed for space-time block coded CDMA systems operating in multipath channels. By combining the powerful RAKE receiver concept for a frequency selective channel with space-time decoding, it is shown that the performance of mobile receivers operating in the presence of channel fading can be improved significantly. The proposed receiver consists of a bank of decorrelating filters designed to suppress the multiple access interference embedded in the received signal before the space-time decoding. The new receiver performs the space-time decoding along each resolvable multipath component and then the outputs are diversity combined to obtain the final decision statistic. The proposed receiver relies on a key constraint imposed on the output of each filter in the bank of decorrelating filters in order to maintain the space-time block code structure embedded in the signal. The proposed receiver can easily be adapted blindly, requiring only the desired user's signature sequence, which is also attractive in the context of wireless mobile communications. Simulation results are provided to confirm the effectiveness of the proposed receiver in multipath CDMA systems.

  16. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding.

    PubMed

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908
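
    The classical threshold part of such a scheme rests on Lagrange interpolation over a finite field, as in Shamir's secret sharing. The sketch below is a generic (k, n) threshold scheme for intuition only; it omits the m-bonacci OAM and reverse Huffman-Fibonacci-tree coding layers that distinguish the proposed scheme.

```python
import random

P = 2**127 - 1  # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        # Horner evaluation of the random degree-(k-1) polynomial.
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any k-subset of the shares yields the same secret, while fewer than k reveal nothing about it.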

  18. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    PubMed Central

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-01-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications. PMID:27515908

  19. Hybrid threshold adaptable quantum secret sharing scheme with reverse Huffman-Fibonacci-tree coding

    NASA Astrophysics Data System (ADS)

    Lai, Hong; Zhang, Jun; Luo, Ming-Xing; Pan, Lei; Pieprzyk, Josef; Xiao, Fuyuan; Orgun, Mehmet A.

    2016-08-01

    With prevalent attacks in communication, sharing a secret between communicating parties is an ongoing challenge. Moreover, it is important to integrate quantum solutions with classical secret sharing schemes with low computational cost for real-world use. This paper proposes a novel hybrid threshold adaptable quantum secret sharing scheme, using an m-bonacci orbital angular momentum (OAM) pump, Lagrange interpolation polynomials, and reverse Huffman-Fibonacci-tree coding. To be exact, we employ entangled states prepared by m-bonacci sequences to detect eavesdropping. Meanwhile, we encode m-bonacci sequences in Lagrange interpolation polynomials to generate the shares of a secret with reverse Huffman-Fibonacci-tree coding. The advantages of the proposed scheme are that it can detect eavesdropping without joint quantum operations, and that it permits secret sharing for an arbitrary but no less than threshold-value number of classical participants with much lower bandwidth. Also, in comparison with existing quantum secret sharing schemes, it still works when there are dynamic changes, such as the unavailability of some quantum channel, the arrival of new participants and the departure of participants. Finally, we provide a security analysis of the new hybrid quantum secret sharing scheme and discuss its useful features for modern applications.

  20. White Dwarf Mergers on Adaptive Meshes. I. Methodology and Code Verification

    NASA Astrophysics Data System (ADS)

    Katz, Max P.; Zingale, Michael; Calder, Alan C.; Swesty, F. Douglas; Almgren, Ann S.; Zhang, Weiqun

    2016-03-01

    The Type Ia supernova (SN Ia) progenitor problem is one of the most perplexing and exciting problems in astrophysics, requiring detailed numerical modeling to complement observations of these explosions. One possible progenitor that has merited recent theoretical attention is the white dwarf (WD) merger scenario, which has the potential to naturally explain many of the observed characteristics of SNe Ia. To date there have been relatively few self-consistent simulations of merging WD systems using mesh-based hydrodynamics. This is the first paper in a series describing simulations of these systems using a hydrodynamics code with adaptive mesh refinement. In this paper we describe our numerical methodology and discuss our implementation in the compressible hydrodynamics code CASTRO, which solves the Euler equations and the Poisson equation for self-gravity, and couples the gravitational and rotational forces to the hydrodynamics. Standard techniques for coupling gravitation and rotation forces to the hydrodynamics do not adequately conserve the total energy of the system for our problem, but recent advances in the literature allow progress, and we discuss our implementation here. We present a set of test problems demonstrating the extent to which our software sufficiently models a system where large amounts of mass are advected across the computational domain over long timescales. Future papers in this series will describe our treatment of the initial conditions of these systems and will examine the early phases of the merger to determine its viability for triggering a thermonuclear detonation.

  1. SIMULATING MAGNETOHYDRODYNAMICAL FLOW WITH CONSTRAINED TRANSPORT AND ADAPTIVE MESH REFINEMENT: ALGORITHMS AND TESTS OF THE AstroBEAR CODE

    SciTech Connect

    Cunningham, Andrew J.; Frank, Adam; Varniere, Peggy; Mitran, Sorin; Jones, Thomas W.

    2009-06-15

    A description is given of the algorithms implemented in the AstroBEAR adaptive mesh-refinement code for ideal magnetohydrodynamics. The code provides several high-resolution shock-capturing schemes which are constructed to maintain conserved quantities of the flow in a finite-volume sense. Divergence-free magnetic field topologies are maintained to machine precision by collating the components of the magnetic field on a cell-interface staggered grid and utilizing the constrained transport approach for integrating the induction equations. The maintenance of magnetic field topologies on adaptive grids is achieved using prolongation and restriction operators which preserve the divergence and curl of the magnetic field across collocated grids of different resolutions. The robustness and correctness of the code is demonstrated by comparing the numerical solution of various tests with analytical solutions or previously published numerical solutions obtained by other codes.
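
    The key invariant of the constrained-transport approach is that the discrete, face-centered divergence of B vanishes per cell. A minimal 2-D sketch of that diagnostic (assumed layout: `bx` stored on x-faces with shape (nx+1, ny), `by` on y-faces with shape (nx, ny+1)):

```python
def cell_div_b(bx, by, dx=1.0, dy=1.0):
    """Discrete divergence of a face-centered field on a staggered grid.

    Constrained transport keeps this per-cell quantity at machine zero:
    div B |_cell = (Bx_east - Bx_west)/dx + (By_north - By_south)/dy.
    """
    nx, ny = len(by), len(bx[0])
    return [[(bx[i + 1][j] - bx[i][j]) / dx + (by[i][j + 1] - by[i][j]) / dy
             for j in range(ny)] for i in range(nx)]
```

Deriving the face fields from a corner-centered vector potential (Bx = dAz/dy, By = -dAz/dx) makes the discrete divergence cancel identically, which is a convenient way to exercise the check.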

  2. Context adaptive binary arithmetic coding-based data hiding in partially encrypted H.264/AVC videos

    NASA Astrophysics Data System (ADS)

    Xu, Dawen; Wang, Rangding

    2015-05-01

    A scheme of data hiding directly in a partially encrypted version of H.264/AVC videos is proposed which includes three parts, i.e., selective encryption, data embedding and data extraction. Selective encryption is performed on context adaptive binary arithmetic coding (CABAC) bin-strings via stream ciphers. By careful selection of CABAC entropy coder syntax elements for selective encryption, the encrypted bitstream is format-compliant and has exactly the same bit rate. Then a data-hider embeds the additional data into partially encrypted H.264/AVC videos using a CABAC bin-string substitution technique without accessing the plaintext of the video content. Since bin-string substitution is carried out on those residual coefficients with approximately the same magnitude, the quality of the decrypted video is satisfactory. Video file size is strictly preserved even after data embedding. In order to adapt to different application scenarios, data extraction can be done either in the encrypted domain or in the decrypted domain. Experimental results have demonstrated the feasibility and efficiency of the proposed scheme.
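
    The format-compliance property hinges on the encryption being length-preserving: XORing the selected bin-strings with a stream-cipher keystream flips bits without changing the bit count. A toy sketch (SHA-256 in counter mode as a stand-in keystream; not the ciphers or the CABAC syntax-element selection used in the paper):

```python
import hashlib

def keystream(key, n):
    """Toy stream cipher: SHA-256 in counter mode (illustration only)."""
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_bytes(data, key):
    """Length-preserving selective encryption; the same call decrypts."""
    ks = keystream(key, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```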

  3. A New Real-coded Genetic Algorithm with an Adaptive Mating Selection for UV-landscapes

    NASA Astrophysics Data System (ADS)

    Oshima, Dan; Miyamae, Atsushi; Nagata, Yuichi; Kobayashi, Shigenobu; Ono, Isao; Sakuma, Jun

    The purpose of this paper is to propose a new real-coded genetic algorithm (RCGA) named Networked Genetic Algorithm (NGA) that intends to find multiple optima simultaneously in deceptive globally multimodal landscapes. Most current techniques such as niching for finding multiple optima take into account big valley landscapes or non-deceptive globally multimodal landscapes but not deceptive ones called UV-landscapes. Adaptive Neighboring Search (ANS) is a promising approach for finding multiple optima in UV-landscapes. ANS utilizes a restricted mating scheme with a crossover-like mutation in order to find optima in deceptive globally multimodal landscapes. However, ANS has a fundamental problem that it does not find all the optima simultaneously in many cases. NGA overcomes the problem by an adaptive parent-selection scheme and an improved crossover-like mutation. We show the effectiveness of NGA over ANS in terms of the number of detected optima in a single run on Fletcher and Powell functions as benchmark problems that are known to have multiple optima, ill-scaledness, and UV-landscapes.

  4. Fast multiple run_before decoding method for efficient implementation of an H.264/advanced video coding context-adaptive variable length coding decoder

    NASA Astrophysics Data System (ADS)

    Ki, Dae Wook; Kim, Jae Ho

    2013-07-01

    We propose a fast new multiple run_before decoding method for context-adaptive variable length coding (CAVLC). The transform coefficients are coded using CAVLC, in which the run_before symbols are generated for a 4×4 block input. To speed up CAVLC decoding, the run_before symbols need to be decoded in parallel. We implemented a new CAVLC table for simultaneous decoding of up to three run_befores. The simulation results show a total speed-up factor of 144% to 205% over various resolutions and quantization steps.
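
    The speed-up comes from replacing up to three serial variable-length lookups with a single, wider table read. The sketch below uses a hypothetical 3-entry prefix-free code rather than the real zerosLeft-dependent run_before tables:

```python
SINGLE = {"1": 0, "01": 1, "00": 2}  # toy prefix-free VLC (hypothetical codes)
W = 6  # window width: three symbols at up to two bits each

def decode_window(bits):
    """Reference decoder: pull up to three symbols from one bit window."""
    syms, pos = [], 0
    while len(syms) < 3 and pos < len(bits):
        for code, val in SINGLE.items():
            if bits.startswith(code, pos):
                syms.append(val)
                pos += len(code)
                break
        else:
            break  # incomplete codeword at the end of the window
    return syms, pos

# Precompute: every W-bit pattern -> (decoded symbols, bits consumed),
# so the decoder does one table read instead of up to three VLC parses.
MULTI = {format(i, f"0{W}b"): decode_window(format(i, f"0{W}b"))
         for i in range(2 ** W)}
```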

  5. Discrete adaptive zone light elements (DAZLE): a new approach to adaptive imaging

    NASA Astrophysics Data System (ADS)

    Kellogg, Robert L.; Escuti, Michael J.

    2007-09-01

    New advances in Liquid Crystal Spatial Light Modulators (LCSLM) offer opportunities for large adaptive optics in the midwave infrared spectrum. A light focusing adaptive imaging system, using the zero-order diffraction state of a polarizer-free liquid crystal polarization grating modulator to create millions of high transmittance apertures, is envisioned in a system called DAZLE (Discrete Adaptive Zone Light Elements). DAZLE adaptively selects large sets of LCSLM apertures using the principles of coded masks, embodied in a hybrid Discrete Fresnel Zone Plate (DFZP) design. Issues of system architecture, including factors of LCSLM aperture pattern and adaptive control, image resolution and focal plane array (FPA) matching, and trade-offs between filter bandwidths, background photon noise, and chromatic aberration are discussed.

  6. Robust image transmission using a new joint source channel coding algorithm and dual adaptive OFDM

    NASA Astrophysics Data System (ADS)

    Farshchian, Masoud; Cho, Sungdae; Pearlman, William A.

    2004-01-01

    In this paper we consider the problem of robust image coding and packetization for the purpose of communications over slow fading frequency selective channels and channels with a shaped spectrum like those of digital subscriber lines (DSL). Towards this end, a novel and analytically based joint source channel coding (JSCC) algorithm to assign unequal error protection is presented. Under a block budget constraint, the image bitstream is de-multiplexed into two classes with different error responses. The algorithm assigns unequal error protection (UEP) in a way that minimizes the expected mean square error (MSE) at the receiver while minimizing the probability of catastrophic failure. In order to minimize the expected mean square error at the receiver, the algorithm assigns unequal protection to the value bit class (VBC) stream. In order to minimize the probability of catastrophic error, which is a characteristic of progressive image coders, the algorithm assigns more protection to the location bit class (LBC) stream than to the VBC stream. Besides being analytical and numerically solvable, the algorithm is based on a new formula developed to estimate the distortion-rate (D-R) curve for the VBC portion of SPIHT. The major advantage of our technique is that the worst-case instantaneous minimum peak signal-to-noise ratio (PSNR) does not differ greatly from the average, while this is not the case for the optimal single-stream (UEP) system. Although the average PSNR of our method and that of the optimal single-stream UEP are about the same, our scheme does not suffer erratic behavior because we have made the probability of catastrophic error arbitrarily small. The coded image is sent via orthogonal frequency division multiplexing (OFDM), which is a known and increasingly popular modulation scheme to combat ISI (Inter-Symbol Interference) and impulsive noise. Using dual adaptive energy OFDM, we use the minimum energy necessary to send each bit stream at a

  7. Reconstruction for distributed video coding: a Markov random field approach with context-adaptive smoothness prior

    NASA Astrophysics Data System (ADS)

    Zhang, Yongsheng; Xiong, Hongkai; He, Zhihai; Yu, Songyu

    2010-07-01

    An important issue in Wyner-Ziv video coding is the reconstruction of Wyner-Ziv frames with decoded bit-planes. So far, there are two major approaches: the Maximum a Posteriori (MAP) reconstruction and the Minimum Mean Square Error (MMSE) reconstruction algorithms. However, these approaches do not exploit smoothness constraints in natural images. In this paper, we model a Wyner-Ziv frame by Markov random fields (MRFs), and produce reconstruction results by finding an MAP estimation of the MRF model. In the MRF model, the energy function consists of two terms: a data term, MSE distortion metric in this paper, measuring the statistical correlation between side-information and the source, and a smoothness term enforcing spatial coherence. In order to better describe the spatial constraints of images, we propose a context-adaptive smoothness term by analyzing the correspondence between the output of Slepian-Wolf decoding and successive frames available at decoders. The significance of the smoothness term varies in accordance with the spatial variation within different regions. To some extent, the proposed approach is an extension to the MAP and MMSE approaches by exploiting the intrinsic smoothness characteristic of natural images. Experimental results demonstrate a considerable performance gain compared with the MAP and MMSE approaches.
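
    The MAP estimate balances a data term against a smoothness term. As a 1-D stand-in (quadratic penalties, scalar pixels, and a uniform weight `lam` instead of the paper's context-adaptive weighting), coordinate-wise minimization of the energy has a closed-form update:

```python
def map_reconstruct(side_info, lam=1.0, iters=50):
    """Gauss-Seidel / ICM minimization of a toy MRF energy:
    E(x) = sum_i (x_i - y_i)^2 + lam * sum_i (x_i - x_{i+1})^2,
    with the data term anchored to side-information y."""
    x = list(side_info)
    n = len(x)
    for _ in range(iters):
        for i in range(n):
            nbrs = [x[j] for j in (i - 1, i + 1) if 0 <= j < n]
            # Setting dE/dx_i = 0 for quadratic terms gives this average.
            x[i] = (side_info[i] + lam * sum(nbrs)) / (1 + lam * len(nbrs))
    return x
```

A constant signal is a fixed point, while high-frequency wiggles in the side-information are smoothed toward their neighbors, mimicking the role of the smoothness prior.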

  8. Bandwidth reduction of high-frequency sonar imagery in shallow water using content-adaptive hybrid image coding

    NASA Astrophysics Data System (ADS)

    Shin, Frances B.; Kil, David H.

    1998-09-01

    One of the biggest challenges in distributed underwater mine warfare for area sanitization and safe power projection during regional conflicts is transmission of compressed raw imagery data to a central processing station via a limited bandwidth channel while preserving crucial target information for further detection and automatic target recognition processing. Moreover, operating in an extremely shallow water with fluctuating channels and numerous interfering sources makes it imperative that image compression algorithms effectively deal with background nonstationarity within an image as well as content variation between images. In this paper, we present a novel approach to lossy image compression that combines image-content classification, content-adaptive bit allocation, and hybrid wavelet tree-based coding for over 100:1 bandwidth reduction with little sacrifice in signal-to-noise ratio (SNR). Our algorithm comprises (1) content-adaptive coding that takes advantage of a classify-before-coding strategy to reduce data mismatch, (2) subimage transformation for energy compaction, and (3) a wavelet tree-based coding for efficient encoding of significant wavelet coefficients. Furthermore, instead of using the embedded zerotree coding with scalar quantization (SQ), we investigate the use of a hybrid coding strategy that combines SQ for high-magnitude outlier transform coefficients and classified vector quantization (CVQ) for compactly clustered coefficients. This approach helps us achieve reduced distortion error and robustness while achieving high compression ratio. Our analysis based on the high-frequency sonar real data that exhibit severe content variability and contain both mines and mine-like clutter indicates that we can achieve over 100:1 compression ratio without losing crucial signal attributes. In comparison, benchmarking of the same data set with the best still-picture compression algorithm called the set partitioning in hierarchical trees (SPIHT) reveals

  9. Adaptive quarter-pel motion estimation and motion vector coding algorithm for the H.264/AVC standard

    NASA Astrophysics Data System (ADS)

    Jung, Seung-Won; Park, Chun-Su; Ha, Le Thanh; Ko, Sung-Jea

    2009-11-01

    We present an adaptive quarter-pel (Qpel) motion estimation (ME) method for H.264/AVC. Instead of applying Qpel ME to all macroblocks (MBs), the proposed method selectively performs Qpel ME in an MB level. In order to reduce the bit rate, we also propose a motion vector (MV) encoding technique that adaptively selects a different variable length coding (VLC) table according to the accuracy of the MV. Experimental results show that the proposed method can achieve about 3% average bit rate reduction.

  10. Adaptive mesh simulations of astrophysical detonations using the ASCI flash code

    NASA Astrophysics Data System (ADS)

    Fryxell, B.; Calder, A. C.; Dursi, L. J.; Lamb, D. Q.; MacNeice, P.; Olson, K.; Ricker, P.; Rosner, R.; Timmes, F. X.; Truran, J. W.; Tufo, H. M.; Zingale, M.

    2001-08-01

    The Flash code was developed at the University of Chicago as part of the Department of Energy's Accelerated Strategic Computing Initiative (ASCI). The code was designed specifically to simulate thermonuclear flashes in compact stars (white dwarfs and neutron stars). This paper will give a brief introduction to the astrophysics problems we wish to address, followed by a description of the current version of the Flash code. Finally, we discuss two simulations of astrophysical detonations that we have carried out with the code. The first is of a helium detonation in an X-ray burst. The other simulation models a carbon detonation in a Type Ia supernova explosion.

  11. A Peak Power Reduction Method with Adaptive Inversion of Clustered Parity-Carriers in BCH-Coded OFDM Systems

    NASA Astrophysics Data System (ADS)

    Muta, Osamu; Akaiwa, Yoshihiko

    In this paper, we propose a simple peak power reduction (PPR) method based on adaptive inversion of the parity-check block of the codeword in BCH-coded OFDM systems. In the proposed method, the entire parity-check block of the codeword is adaptively inverted by multiplying it by weighting factors (WFs) so as to minimize the peak-to-average power ratio (PAPR) of the OFDM signal, symbol by symbol. At the receiver, these WFs are estimated based on the properties of BCH decoding. When a primitive BCH code with single error correction, such as the (31,26) code, is used, the proposed method estimates the WFs with a significant-bit protection method which assigns a significant bit to the best subcarrier selected among all possible subcarriers. Computer simulations show that, when (31,26), (31,21) and (32,21) BCH codes are employed, the PAPR of the OFDM signal at a complementary cumulative distribution function (CCDF) of 10^-4 is reduced by about 1.9, 2.5 and 2.5 dB, respectively, by applying the PPR method, while achieving BER performance comparable to the case with perfect WF estimation in an exponentially decaying 12-path Rayleigh fading condition.
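
    The transmitter-side logic amounts to trying the parity block both ways and keeping the lower-PAPR symbol. A toy BPSK/IDFT sketch (a single ±1 inversion decision rather than the paper's per-symbol WF search, and a tiny symbol size):

```python
import cmath

def ofdm_papr(bits):
    """PAPR of a BPSK-modulated OFDM symbol via a pure-Python inverse DFT."""
    n = len(bits)
    syms = [1.0 if b else -1.0 for b in bits]
    power = []
    for t in range(n):
        s = sum(syms[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n
        power.append(abs(s) ** 2)
    return max(power) / (sum(power) / n)

def ppr_select(data_bits, parity_bits):
    """Try the parity block as-is and inverted; keep the lower-PAPR symbol.

    The receiver can undo the inversion because BCH decoding reveals
    whether the parity block was flipped.
    """
    cand = [data_bits + parity_bits,
            data_bits + [1 - b for b in parity_bits]]
    return min(cand, key=ofdm_papr)
```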

  12. Rate-adaptive FSO links over atmospheric turbulence channels by jointly using repetition coding and silence periods.

    PubMed

    García-Zambrana, Antonio; Castillo-Vázquez, Carmen; Castillo-Vázquez, Beatriz

    2010-11-22

    In this paper, a new and simple rate-adaptive transmission scheme for free-space optical (FSO) communication systems with intensity modulation and direct detection (IM/DD) over atmospheric turbulence channels is analyzed. This scheme is based on the joint use of repetition coding and variable silence periods, exploiting the potential time-diversity order (TDO) available in the turbulent channel as well as allowing the increase of the peak-to-average optical power ratio (PAOPR). Here, repetition coding is first used to accommodate the transmission rate to the channel conditions, until the whole time-diversity order made available in the turbulent channel by interleaving is exploited. Then, once no more diversity gain is available, the rate reduction can be increased by using variable silence periods in order to increase the PAOPR. Novel closed-form expressions for the average bit-error rate (BER), as well as their corresponding asymptotic expressions, are presented when the irradiance of the transmitted optical beam follows negative exponential and gamma-gamma distributions, covering a wide range of atmospheric turbulence conditions. The results show the same diversity order as the corresponding rate-adaptive transmission scheme based only on repetition codes, but with a relevant improvement in coding gain. Simulation results further confirm the analytical results. Here, not only rectangular pulses are considered but also OOK formats with any pulse shape, corroborating the advantage of using pulses with high PAOPR, such as Gaussian or squared hyperbolic secant pulses. We also determine the achievable information rate for the rate-adaptive transmission schemes here analyzed.
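
    The adaptation policy described above is: spend rate reduction on repetition first, up to the channel's time-diversity order, then on silence periods, which raise the PAOPR. A simplified bookkeeping sketch (integer factors assumed, with `reduction` divisible by the chosen repetition count; not the paper's analysis):

```python
def adapt_rate(base_rate_bps, max_tdo, reduction):
    """Split a desired rate-reduction factor between repetition and silence.

    reduction: total rate-reduction factor (e.g. 8 means rate/8).
    Returns (repetitions, silence_slots_per_bit, effective_rate, paopr_gain).
    """
    reps = min(reduction, max_tdo)   # diversity gain first
    silence = reduction // reps - 1  # then idle slots raise the PAOPR
    eff = base_rate_bps / (reps * (1 + silence))
    paopr_gain = 1 + silence         # peak power fixed, average power drops
    return reps, silence, eff, paopr_gain
```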

  13. Multi-level adaptive particle mesh (MLAPM): a c code for cosmological simulations

    NASA Astrophysics Data System (ADS)

    Knebe, Alexander; Green, Andrew; Binney, James

    2001-08-01

    We present a computer code written in C that is designed to simulate structure formation from collisionless matter. The code is purely grid-based and uses a recursively refined Cartesian grid to solve Poisson's equation for the potential, rather than obtaining the potential from a Green's function. Refinements can have arbitrary shapes and in practice closely follow the complex morphology of the density field that evolves. The time-step shortens by a factor of 2 with each successive refinement. Competing approaches to N-body simulation are discussed from the point of view of the basic theory of N-body simulation. It is argued that an appropriate choice of softening length ɛ is of great importance and that ɛ should be at all points an appropriate multiple of the local interparticle separation. Unlike tree and P3M codes, multigrid codes automatically satisfy this requirement. We show that at early times and low densities in cosmological simulations, ɛ needs to be significantly smaller relative to the interparticle separation than in virialized regions. Tests of the ability of the code's Poisson solver to recover the gravitational fields of both virialized haloes and Zel'dovich waves are presented, as are tests of the code's ability to reproduce analytic solutions for plane-wave evolution. The times required to conduct a ΛCDM cosmological simulation for various configurations are compared with the times required to complete the same simulation with the ART, AP3M and GADGET codes. The power spectra, halo mass functions and halo-halo correlation functions of simulations conducted with different codes are compared. The code is available from http://www-thphys.physics.ox.ac.uk/users/MLAPM.
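
    At the heart of any such grid code is a relaxation-based Poisson solve on each grid level. A minimal 1-D Jacobi sketch (Dirichlet zero boundaries; real multigrid accelerates this with coarse-grid corrections, and MLAPM adds recursive refinement on top):

```python
def solve_poisson_1d(rho, h=1.0, iters=2000):
    """Jacobi relaxation for phi'' = rho on a uniform grid, phi = 0 at ends.

    A stand-in for one level of a grid-based Poisson solver: each sweep
    replaces phi[i] with the value that satisfies the 3-point stencil
    given its current neighbors.
    """
    n = len(rho)
    phi = [0.0] * n
    for _ in range(iters):
        new = phi[:]
        for i in range(1, n - 1):
            new[i] = 0.5 * (phi[i - 1] + phi[i + 1] - h * h * rho[i])
        phi = new
    return phi
```

On convergence the discrete residual phi[i-1] - 2*phi[i] + phi[i+1] - h^2*rho[i] vanishes at every interior point.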

  14. Anti-Voice Adaptation Suggests Prototype-Based Coding of Voice Identity

    PubMed Central

    Latinus, Marianne; Belin, Pascal

    2011-01-01

    We used perceptual aftereffects induced by adaptation with anti-voice stimuli to investigate voice identity representations. Participants learned a set of voices then were tested on a voice identification task with vowel stimuli morphed between identities, after different conditions of adaptation. In Experiment 1, participants chose the identity opposite to the adapting anti-voice significantly more often than the other two identities (e.g., after being adapted to anti-A, they identified the average voice as A). In Experiment 2, participants showed a bias for identities opposite to the adaptor specifically for anti-voice, but not for non-anti-voice adaptors. These results are strikingly similar to adaptation aftereffects observed for facial identity. They are compatible with a representation of individual voice identities in a multidimensional perceptual voice space referenced on a voice prototype. PMID:21847384

  15. Reading the second code: mapping epigenomes to understand plant growth, development, and adaptation to the environment.

    PubMed

    2012-06-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual's set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of "epigenetic" layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature's second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution.

  16. Debuncher Momentum Aperture Measurements

    SciTech Connect

    O'Day, S.

    1991-01-01

    During the November 1990 through January 1991 p̄ studies period, the momentum aperture of the beam in the debuncher ring was measured. The momentum aperture (Δp/p) was found to be 4.7%. The momentum spread was also measured with beam bunch rotation off. A nearly constant particle population density was observed for particles with Δp/p of less than 4.3%, indicating virtually unobstructed orbits in this region. The population of particles with momenta outside this aperture was found to decrease rapidly. An absolute or 'cut-off' momentum aperture of Δp/p = 5.50% was measured.

  17. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females.

  18. Variable-aperture screen

    DOEpatents

    Savage, George M.

    1991-01-01

    Apparatus for separating material into first and second portions according to size including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function.

  19. Rotating Aperture System

    DOEpatents

    Rusnak, Brian; Hall, James M.; Shen, Stewart; Wood, Richard L.

    2005-01-18

    A rotating aperture system includes a low-pressure vacuum pumping stage with apertures for passage of a deuterium beam. A stator assembly includes holes for passage of the beam. The rotor assembly includes a shaft connected to a deuterium gas cell or a crossflow venturi that has a single aperture on each side that together align with holes every rotation. The rotating apertures are synchronized with the firing of the deuterium beam such that the beam fires through a clear aperture and passes into the Xe gas beam stop. Portions of the rotor are lapped into the stator to improve the sealing surfaces, to prevent rapid escape of the deuterium gas from the gas cell.

  20. TELESCOPES: Astronomers Overcome 'Aperture Envy'.

    PubMed

    Irion, R

    2000-07-01

    Many users of small telescopes are disturbed by the trend of shutting down smaller instruments in order to help fund bigger and bolder ground-based telescopes. Small telescopes can thrive in the shadow of giant new observatories, they say--but only if they are adapted to specialized projects. Telescopes with apertures of 2 meters or less have unique abilities to monitor broad swaths of the sky and stare at the same objects night after night, sometimes for years; various teams are turning small telescopes into robots, creating networks that span the globe and devoting them to survey projects that big telescopes don't have a prayer of tackling. PMID:17832960

  1. IMPROVEMENTS IN CODED APERTURE THERMAL NEUTRON IMAGING.

    SciTech Connect

    VANIER,P.E.

    2003-08-03

    A new thermal neutron imaging system has been constructed, based on a 20-cm x 17-cm He-3 position-sensitive detector with spatial resolution better than 1 mm. New compact custom-designed position-decoding electronics are employed, as well as high-precision cadmium masks with Modified Uniformly Redundant Array patterns. Fast Fourier Transform algorithms are incorporated into the deconvolution software to provide rapid conversion of shadowgrams into real images. The system demonstrates the principles for locating sources of thermal neutrons by a stand-off technique, as well as visualizing the shapes of nearby sources. The data acquisition time could potentially be reduced two orders of magnitude by building larger detectors.

  2. Simulation of Supersonic Jet Noise with the Adaptation of Overflow CFD Code and Kirchhoff Surface Integral

    NASA Technical Reports Server (NTRS)

    Kandula, Max; Caimi, Raoul; Steinrock, T. (Technical Monitor)

    2001-01-01

    An acoustic prediction capability for supersonic axisymmetric jets was developed on the basis of OVERFLOW Navier-Stokes CFD (Computational Fluid Dynamics) code of NASA Langley Research Center. Reynolds-averaged turbulent stresses in the flow field are modeled with the aid of Spalart-Allmaras one-equation turbulence model. Appropriate acoustic and outflow boundary conditions were implemented to compute time-dependent acoustic pressure in the nonlinear source-field. Based on the specification of acoustic pressure, its temporal and normal derivatives on the Kirchhoff surface, the near-field and the far-field sound pressure levels are computed via Kirchhoff surface integral, with the Kirchhoff surface chosen to enclose the nonlinear sound source region described by the CFD code. The methods are validated by a comparison of the predictions of sound pressure levels with the available data for an axisymmetric turbulent supersonic (Mach 2) perfectly expanded jet.
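The two-step Kirchhoff approach described above rests on the classical surface-integral formula. For a stationary control surface S enclosing the nonlinear source region, one commonly quoted form (taken from the general aeroacoustics literature, not from the paper itself; the sign convention depends on the orientation chosen for the surface normal n) is:

```latex
p'(\mathbf{x},t) \;=\; \frac{1}{4\pi} \oint_{S}
\left[
  \frac{p'}{r^{2}}\frac{\partial r}{\partial n}
  \;-\; \frac{1}{r}\frac{\partial p'}{\partial n}
  \;+\; \frac{1}{c\,r}\frac{\partial r}{\partial n}\frac{\partial p'}{\partial t}
\right]_{\tau} \mathrm{d}S,
\qquad \tau = t - \frac{r}{c}
```

where p' is the acoustic pressure on S (supplied by the CFD solution together with its normal and temporal derivatives), r is the distance from a surface element to the observer, c the speed of sound, and the bracketed terms are evaluated at the retarded time tau.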

  3. Integer-linear-programing optimization in scalable video multicast with adaptive modulation and coding in wireless networks.

    PubMed

    Lee, Dongyul; Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm.
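The MCS-and-airtime assignment that the ILP optimizes can be illustrated with a toy brute-force search. All rates, CQI thresholds, layer sizes, and user channels below are made-up illustrative numbers, and exhaustive enumeration stands in for a real ILP solver:

```python
from itertools import product

# Illustrative numbers only (not from the paper).
MCS_RATE = [1, 2, 4]           # bits per slot; MCS 0 is most robust, MCS 2 fastest
MCS_MIN_CQI = [1, 2, 3]        # minimum channel quality needed to decode each MCS
LAYER_BITS = [100, 100, 100]   # bits per frame for base + two enhancement layers
USER_CQI = [1, 2, 3, 3]        # channel quality of each multicast user
TOTAL_SLOTS = 200              # time-resource budget per frame

def best_assignment():
    """Exhaustive search over per-layer MCS choices (a stand-in for the
    ILP solver): maximize the total number of layers received across
    users, subject to the airtime budget and SVC layer dependency."""
    best, best_util = None, -1
    for mcs in product(range(len(MCS_RATE)), repeat=len(LAYER_BITS)):
        slots = sum(LAYER_BITS[l] / MCS_RATE[m] for l, m in enumerate(mcs))
        if slots > TOTAL_SLOTS:        # violates the time-resource constraint
            continue
        util = 0
        for cqi in USER_CQI:
            # A user benefits from layer l only if it can decode layer l
            # and every layer below it (SVC decoding dependency).
            for m in mcs:
                if cqi >= MCS_MIN_CQI[m]:
                    util += 1
                else:
                    break
        if util > best_util:
            best, best_util = mcs, util
    return best, best_util
```

With these numbers the search assigns the most robust MCS to the base layer, so the weak user still receives basic quality, while faster MCSs carry the enhancement layers for the strong users; a real ILP formulation hands the same objective and constraints to a solver instead of enumerating.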

  4. Integer-Linear-Programing Optimization in Scalable Video Multicast with Adaptive Modulation and Coding in Wireless Networks

    PubMed Central

    Lee, Chaewoo

    2014-01-01

The advancement of wideband wireless networks supports real-time services such as IPTV and live video streaming. However, because of the shared nature of the wireless medium, efficient resource allocation has been studied to achieve a high level of acceptability and proliferation of wireless multimedia. Scalable video coding (SVC) with adaptive modulation and coding (AMC) provides an excellent solution for wireless video streaming. By assigning different modulation and coding schemes (MCSs) to video layers, SVC can provide good video quality to users in good channel conditions and also basic video quality to users in bad channel conditions. For optimal resource allocation, a key issue in applying SVC to wireless multicast service is how to assign MCSs and time resources to each SVC layer under heterogeneous channel conditions. We formulate this problem with integer linear programming (ILP) and provide numerical results to show the performance in an IEEE 802.16m environment. The results show that our methodology enhances the overall system throughput compared to an existing algorithm. PMID:25276862

  5. Perceiving Affordances for Fitting through Apertures

    ERIC Educational Resources Information Center

    Ishak, Shaziela; Adolph, Karen E.; Lin, Grace C.

    2008-01-01

    Affordances--possibilities for action--are constrained by the match between actors and their environments. For motor decisions to be adaptive, affordances must be detected accurately. Three experiments examined the correspondence between motor decisions and affordances as participants reached through apertures of varying size. A psychophysical…

  6. Sub-Aperture Interferometers

    NASA Technical Reports Server (NTRS)

    Zhao, Feng

    2010-01-01

Sub-aperture interferometers -- also called wavefront-split interferometers -- have been developed for simultaneously measuring displacements of multiple targets. The terms "sub-aperture" and "wavefront-split" signify that the original measurement light beam in an interferometer is split into multiple sub-beams derived from non-overlapping portions of the original measurement-beam aperture. Each measurement sub-beam is aimed at a retroreflector mounted on one of the targets. The splitting of the measurement beam is accomplished by use of truncated mirrors and masks, as shown in the example below.

  7. Bistatic synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Yates, Gillian

    Synthetic aperture radar (SAR) allows all-weather, day and night, surface surveillance and has the ability to detect, classify and geolocate objects at long stand-off ranges. Bistatic SAR, where the transmitter and the receiver are on separate platforms, is seen as a potential means of countering the vulnerability of conventional monostatic SAR to electronic countermeasures, particularly directional jamming, and avoiding physical attack of the imaging platform. As the receiving platform can be totally passive, it does not advertise its position by RF emissions. The transmitter is not susceptible to jamming and can, for example, operate at long stand-off ranges to reduce its vulnerability to physical attack. This thesis examines some of the complications involved in producing high-resolution bistatic SAR imagery. The effect of bistatic operation on resolution is examined from a theoretical viewpoint and analytical expressions for resolution are developed. These expressions are verified by simulation work using a simple 'point by point' processor. This work is extended to look at using modern practical processing engines for bistatic geometries. Adaptations of the polar format algorithm and range migration algorithm are considered. The principal achievement of this work is a fully airborne demonstration of bistatic SAR. The route taken in reaching this is given, along with some results. The bistatic SAR imagery is analysed and compared to the monostatic imagery collected at the same time. Demonstrating high-resolution bistatic SAR imagery using two airborne platforms represents what I believe to be a European first and is likely to be the first time that this has been achieved outside the US (the UK has very little insight into US work on this topic). Bistatic target characteristics are examined through the use of simulations. This also compares bistatic imagery with monostatic and gives further insight into the utility of bistatic SAR.

  8. fMR-Adaptation Reveals Invariant Coding of Biological Motion on the Human STS

    PubMed Central

    Grossman, Emily D.; Jardine, Nicole L.; Pyles, John A.

    2009-01-01

    Neuroimaging studies of biological motion perception have found a network of coordinated brain areas, the hub of which appears to be the human posterior superior temporal sulcus (STSp). Understanding the functional role of the STSp requires characterizing the response tuning of neuronal populations underlying the BOLD response. Thus far our understanding of these response properties comes from single-unit studies of the monkey anterior STS, which has individual neurons tuned to body actions, with a small population invariant to changes in viewpoint, position and size of the action being viewed. To measure for homologous functional properties on the human STS, we used fMR-adaptation to investigate action, position and size invariance. Observers viewed pairs of point-light animations depicting human actions that were either identical, differed in the action depicted, locally scrambled, or differed in the viewing perspective, the position or the size. While extrastriate hMT+ had neural signals indicative of viewpoint specificity, the human STS adapted for all of these changes, as compared to viewing two different actions. Similar findings were observed in more posterior brain areas also implicated in action recognition. Our findings are evidence for viewpoint invariance in the human STS and related brain areas, with the implication that actions are abstracted into object-centered representations during visual analysis. PMID:20431723

  9. Query-Adaptive Hash Code Ranking for Large-Scale Multi-View Visual Search.

    PubMed

    Liu, Xianglong; Huang, Lei; Deng, Cheng; Lang, Bo; Tao, Dacheng

    2016-10-01

Hash-based nearest neighbor search has become attractive in many applications. However, the quantization in hashing usually degrades the discriminative power when using Hamming distance ranking. Moreover, for large-scale visual search, existing hashing methods cannot directly support efficient search over data with multiple sources, even though the literature has shown that adaptively incorporating complementary information from diverse sources or views can significantly boost search performance. To address these problems, this paper proposes a novel and generic approach to building multiple hash tables with multiple views and generating fine-grained ranking results at the bitwise and tablewise levels. For each hash table, a query-adaptive bitwise weighting is introduced to alleviate the quantization loss by simultaneously exploiting the quality of hash functions and their complementarity for nearest neighbor search. From the tablewise aspect, multiple hash tables are built for different data views as a joint index, over which a query-specific rank fusion is proposed to rerank all results from the bitwise ranking by diffusing in a graph. Comprehensive experiments on image search over three well-known benchmarks show that the proposed method achieves up to 17.11% and 20.28% performance gains on single and multiple table search over the state-of-the-art methods. PMID:27448359
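The bitwise-weighting idea can be sketched as a weighted Hamming ranking. The codes and the per-bit weights below are hand-picked for illustration; the paper derives its query-adaptive weights from the hash functions themselves:

```python
import numpy as np

# Toy database of 8-bit hash codes and one query code.
db = np.array([[1, 0, 1, 1, 0, 0, 1, 0],
               [1, 0, 1, 0, 0, 0, 1, 0],
               [0, 1, 0, 1, 1, 1, 0, 1]], dtype=np.uint8)
query = np.array([1, 0, 1, 1, 0, 0, 1, 1], dtype=np.uint8)

# Hypothetical query-adaptive weights: bits judged more reliable for this
# query incur a larger penalty when they disagree.
w = np.array([1.0, 1.0, 2.0, 2.0, 0.5, 0.5, 1.0, 0.2])

def weighted_hamming_rank(db, query, w):
    """Rank codes by weighted Hamming distance: plain Hamming counts
    differing bits, while per-bit weights turn the count into a
    real-valued score, yielding a much finer-grained ranking."""
    diffs = db != query                   # which bits disagree, per code
    dists = (diffs * w).sum(axis=1)       # weighted disagreement score
    return np.argsort(dists, kind="stable"), dists
```

Note that codes 0 and 1 differ from the query by one and two bits respectively, but the weights make that gap explicit (0.2 versus 2.2) instead of a coarse integer count.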

  11. A novel pseudoderivative-based mutation operator for real-coded adaptive genetic algorithms

    PubMed Central

    Kanwal, Maxinder S; Ramesh, Avinash S; Huang, Lauren A

    2013-01-01

Recent development of large databases, especially those in genetics and proteomics, is pushing the development of novel computational algorithms that implement rapid and accurate search strategies. One successful approach has been to use artificial intelligence methods, including pattern recognition (e.g. neural networks) and optimization techniques (e.g. genetic algorithms). The focus of this paper is on optimizing the design of genetic algorithms by using an adaptive mutation rate that is derived from comparing the fitness values of successive generations. We propose a novel pseudoderivative-based mutation rate operator designed to allow a genetic algorithm to escape local optima and successfully continue to the global optimum. Once proven successful, this algorithm can be implemented to solve real problems in neurology and bioinformatics. As a first step towards this goal, we tested our algorithm on two 3-dimensional surfaces with multiple local optima, but only one global optimum, as well as on the N-queens problem, an applied problem in which the function that maps the curve is implicit. For all tests, the adaptive mutation rate allowed the genetic algorithm to find the global optimal solution, performing significantly better than other search methods, including genetic algorithms that implement fixed mutation rates. PMID:24627784
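Since the abstract does not give the operator's exact form, the following is only a rough sketch of the idea: a real-coded GA whose mutation rate is adapted from the change in best fitness between successive generations, a crude "pseudoderivative" of the fitness trajectory:

```python
import random

def sphere(x):
    """Toy test surface with a single global optimum at the origin."""
    return sum(v * v for v in x)

def adaptive_ga(fitness=sphere, dim=2, pop_size=30, gens=300, seed=1):
    """Real-coded GA with an adaptive mutation rate driven by the change
    in best fitness between successive generations (a crude stand-in for
    the paper's pseudoderivative operator)."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    mut_rate, prev_best = 0.1, None
    for _ in range(gens):
        pop.sort(key=fitness)              # elitist selection on fitness
        best = fitness(pop[0])
        if prev_best is not None:
            # A plateau in best fitness suggests a local optimum, so raise
            # the mutation rate to escape it; steady improvement lets it decay.
            plateau = (prev_best - best) < 1e-6
            mut_rate = min(0.5, mut_rate * 1.5) if plateau else max(0.01, mut_rate * 0.7)
        prev_best = best
        elite = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # arithmetic crossover
            if rng.random() < mut_rate:                   # adaptive mutation
                i = rng.randrange(dim)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = elite + children
    return fitness(min(pop, key=fitness))
```

Because the elite always survives, the best fitness is non-increasing, and the rising mutation rate on plateaus supplies the exploration needed to leave local basins.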

  12. Variable-aperture screen

    DOEpatents

    Savage, G.M.

    1991-10-29

    Apparatus is described for separating material into first and second portions according to size including a plurality of shafts, a plurality of spaced disks radiating outwardly from each of the shafts to define apertures and linkage interconnecting the shafts for moving the shafts toward or away from one another to vary the size of the apertures while the apparatus is performing the separating function. 10 figures.

  13. APT: Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ

    2012-08-01

Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It has a graphical user interface (GUI) which allows the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. Mouse-clicking on a source in the displayed image draws a circular or elliptical aperture and sky annulus around the source and computes the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs, including image histogram, aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has functions for customizing calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. The radial-profile-interpolation source model, accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
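The core aperture-plus-sky-annulus calculation that APT visualizes can be sketched in a few lines (a minimal illustration, not APT's actual code):

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap, r_in, r_out):
    """Sum counts in a circular aperture and subtract the local sky,
    estimated as the median of an annulus around the source."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)            # distance of each pixel from source
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky_level = np.median(img[annulus])       # robust local background estimate
    flux = img[aperture].sum() - sky_level * aperture.sum()
    return flux, sky_level

# Synthetic frame: flat sky of 10 counts plus a 100-count point source.
img = np.full((64, 64), 10.0)
img[32, 32] += 100.0
flux, sky = aperture_photometry(img, 32, 32, r_ap=5, r_in=8, r_out=12)
```

On this synthetic frame the background-subtracted flux recovers the injected 100 counts exactly; on real data, the choice of aperture and annulus radii trades off source capture against sky contamination, which is what APT's interactive plots help judge.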

  14. Parallelization of GeoClaw code for modeling geophysical flows with adaptive mesh refinement on many-core systems

    USGS Publications Warehouse

    Zhang, S.; Yuen, D.A.; Zhu, A.; Song, S.; George, D.L.

    2011-01-01

We parallelized the GeoClaw code on a one-level grid using OpenMP in March 2011 to meet the urgent need of simulating tsunami waves near shore from Tohoku 2011, and achieved over 75% of the potential speed-up on an eight-core Dell Precision T7500 workstation [1]. After submitting that work to SC11 - the International Conference for High Performance Computing - we obtained an unreleased OpenMP version of GeoClaw from David George, who developed the GeoClaw code as part of his Ph.D. thesis. In this paper, we show the complementary characteristics of the two approaches used in parallelizing GeoClaw and the speed-up obtained by combining the advantages of the two individual approaches with adaptive mesh refinement (AMR), demonstrating the capability of running GeoClaw efficiently on many-core systems. We also show a novel simulation of the Tohoku 2011 tsunami waves inundating the Sendai airport and the Fukushima nuclear power plants, in which the finest grid distance of 20 meters is achieved through 4-level AMR. This simulation yields quite good predictions of the wave heights and travel time of the tsunami waves. © 2011 IEEE.

  15. First Clinical Release of an Online, Adaptive, Aperture-Based Image-Guided Radiotherapy Strategy in Intensity-Modulated Radiotherapy to Correct for Inter- and Intrafractional Rotations of the Prostate

    SciTech Connect

    Deutschmann, Heinz; Kametriser, Gerhard; Steininger, Philipp; Scherer, Philipp; Schoeller, Helmut; Gaisberger, Christoph; Mooslechner, Michaela; Mitterlechner, Bernhard; Weichenberger, Harald; Fastner, Gert; Wurstbauer, Karl; Jeschke, Stephan; Forstner, Rosemarie; Sedlmayer, Felix

    2012-08-01

Purpose: We developed and evaluated a correction strategy for prostate rotations using direct adaptation of segments in intensity-modulated radiotherapy (IMRT). Method and Materials: Implanted fiducials (four gold markers) were used to determine interfractional translations, rotations, and dilations of the prostate. We used hybrid imaging: The markers were automatically detected in two pretreatment planar X-ray projections; their actual position in three-dimensional space was first reconstructed from these images. The structure set comprising prostate, seminal vesicles, and adjacent rectum wall was transformed accordingly in 6 degrees of freedom. Shapes of IMRT segments were geometrically adapted in a class-solution forward-planning approach, derived within seconds on-site and treated immediately. Intrafractional movements were followed in MV electronic portal images captured on the fly. Results: In 31 of 39 patients, for 833 of 1013 fractions (supine, flat couch, knee support, comfortably full bladder, empty rectum, no intraprostatic marker migrations >2 mm of more than one marker), the online aperture adaptation allowed safe reduction of clinical target volume-planning target volume (prostate) margins down to 5 mm when only interfractional corrections were applied: Dominant L-R rotations were found to be 5.3° (mean of means), standard deviation of means ±4.9°, maximum at 30.7°. Three-dimensional vector translations relative to skin markings were 9.3 ± 4.4 mm (maximum, 23.6 mm). Intrafractional movements in 7.7 ± 1.5 min (maximum, 15.1 min) between kV imaging and the last beam's electronic portal images showed further L-R rotations of 2.5° ± 2.3° (maximum, 26.9°), and three-dimensional vector translations of 3.0 ± 3.7 mm (maximum, 10.2 mm). Addressing intrafractional errors could further reduce margins to 3 mm. Conclusion: We demonstrated the clinical feasibility of an online

  16. An Adaptive Data Gathering Scheme for Multi-Hop Wireless Sensor Networks Based on Compressed Sensing and Network Coding

    PubMed Central

    Yin, Jun; Yang, Yuwang; Wang, Lei

    2016-01-01

    Joint design of compressed sensing (CS) and network coding (NC) has been demonstrated to provide a new data gathering paradigm for multi-hop wireless sensor networks (WSNs). By exploiting the correlation of the network sensed data, a variety of data gathering schemes based on NC and CS (Compressed Data Gathering—CDG) have been proposed. However, these schemes assume that the sparsity of the network sensed data is constant and the value of the sparsity is known before starting each data gathering epoch, thus they ignore the variation of the data observed by the WSNs which are deployed in practical circumstances. In this paper, we present a complete design of the feedback CDG scheme where the sink node adaptively queries those interested nodes to acquire an appropriate number of measurements. The adaptive measurement-formation procedure and its termination rules are proposed and analyzed in detail. Moreover, in order to minimize the number of overall transmissions in the formation procedure of each measurement, we have developed a NP-complete model (Maximum Leaf Nodes Minimum Steiner Nodes—MLMS) and realized a scalable greedy algorithm to solve the problem. Experimental results show that the proposed measurement-formation method outperforms previous schemes, and experiments on both datasets from ocean temperature and practical network deployment also prove the effectiveness of our proposed feedback CDG scheme. PMID:27043574
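The compressed-sensing step underlying CDG schemes can be illustrated with random projections and a small Orthogonal Matching Pursuit decoder. The sizes are illustrative, and OMP stands in for whatever recovery algorithm a particular CDG scheme uses:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the k columns of A
    that best explain y, refitting by least squares at each step."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal to chosen columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Illustrative sizes: 50 sensor readings compressed into 30 coded
# measurements; the sensed field is 3-sparse in its own basis.
rng = np.random.default_rng(0)
n, m, k = 50, 30, 3
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [3.0, -2.0, 1.5]
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random projection (measurement) matrix
y = A @ x_true                             # measurements the sink collects
x_hat = omp(A, y, k)
```

In a CDG deployment each row of A corresponds to one network-coded measurement gathered across nodes, and the feedback loop in the paper exists precisely because the right number of rows m depends on a sparsity k that varies over time.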

  17. Differential Synthetic Aperture Ladar

    SciTech Connect

    Stappaerts, E A; Scharlemann, E

    2005-02-07

    We report a differential synthetic aperture ladar (DSAL) concept that relaxes platform and laser requirements compared to conventional SAL. Line-of-sight translation/vibration constraints are reduced by several orders of magnitude, while laser frequency stability is typically relaxed by an order of magnitude. The technique is most advantageous for shorter laser wavelengths, ultraviolet to mid-infrared. Analytical and modeling results, including the effect of speckle and atmospheric turbulence, are presented. Synthetic aperture ladars are of growing interest, and several theoretical and experimental papers have been published on the subject. Compared to RF synthetic aperture radar (SAR), platform/ladar motion and transmitter bandwidth constraints are especially demanding at optical wavelengths. For mid-IR and shorter wavelengths, deviations from a linear trajectory along the synthetic aperture length have to be submicron, or their magnitude must be measured to that precision for compensation. The laser coherence time has to be the synthetic aperture transit time, or transmitter phase has to be recorded and a correction applied on detection.

  18. Adaptive Code Division Multiple Access Protocol for Wireless Network-on-Chip Architectures

    NASA Astrophysics Data System (ADS)

    Vijayakumaran, Vineeth

Massive levels of integration following Moore's Law ushered in a paradigm shift in the way on-chip interconnections were designed. With more and more cores on the same die, traditional bus-based interconnections are no longer a scalable communication infrastructure. On-chip networks were proposed, enabling a scalable plug-and-play mechanism for interconnecting hundreds of cores on the same chip. Wired interconnects between the cores in a traditional Network-on-Chip (NoC) system become a bottleneck as the number of cores increases, raising the latency and energy needed to transmit signals over them. Hence, many alternative emerging interconnect technologies have been proposed, namely 3D, photonic, and multi-band RF interconnects. Although they provide better connectivity, higher speed, and higher bandwidth than wired interconnects, they also face challenges with heat dissipation and manufacturing difficulties. On-chip wireless interconnects are another proposed alternative that needs no physical interconnection layout, as data travels over the wireless medium. They are integrated into a hybrid NoC architecture consisting of both wired and wireless links, which provides higher bandwidth, lower latency, lower area overhead, and reduced energy dissipation in communication. However, as the bandwidth of the wireless channels is limited, an efficient media access control (MAC) scheme is required to enhance the utilization of the available bandwidth. This thesis proposes using a multiple access mechanism such as Code Division Multiple Access (CDMA) to enable multiple transmitter-receiver pairs to send data over the wireless channel simultaneously. It will be shown that such a hybrid wireless NoC with an efficient CDMA-based MAC protocol can significantly increase the performance of the system while lowering the energy dissipation in data transfer. In this work it is shown that the wireless NoC with the proposed CDMA based MAC protocol
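The CDMA mechanism at the heart of the proposed MAC, letting several transmitter-receiver pairs share the wireless channel through orthogonal spreading codes, can be sketched as follows (Walsh codes are chosen here for illustration; the thesis's actual code assignment is not given in the abstract):

```python
import numpy as np

def walsh_matrix(n):
    """Hadamard/Walsh matrix of order n (a power of two); its rows are
    mutually orthogonal spreading codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_matrix(4)
code_a, code_b = W[1], W[2]          # one orthogonal code per TX-RX pair

# Two pairs send +1/-1 data bits at the same time, each spreading every
# bit over 4 chips with its own code; the channel is the chip-wise sum.
bits_a = np.array([1, -1, 1])
bits_b = np.array([-1, -1, 1])
channel = (np.concatenate([b * code_a for b in bits_a])
           + np.concatenate([b * code_b for b in bits_b]))

# Each receiver despreads with its own code; orthogonality cancels the
# other pair's transmission entirely.
rx_a = [int(np.sign(channel[i * 4:(i + 1) * 4] @ code_a)) for i in range(3)]
rx_b = [int(np.sign(channel[i * 4:(i + 1) * 4] @ code_b)) for i in range(3)]
```

Because the codes are orthogonal, the correlation at each receiver isolates its own pair's bit stream even though both streams occupied the channel simultaneously, which is exactly the concurrency the MAC exploits.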

  19. Advanced Multiple Aperture Seeing Profiler

    NASA Astrophysics Data System (ADS)

    Ren, Deqing; Zhao, Gang

    2016-10-01

Measurements of the seeing profile of the atmospheric turbulence as a function of altitude are crucial for solar astronomical site characterization, as well as for the optimized design and performance estimation of solar Multi-Conjugate Adaptive Optics (MCAO). Knowledge of the seeing distribution, up to 30 km, at a potential new solar observation site is required for future solar MCAO developments. Current optical seeing profile measurement techniques are limited by the need to use a large facility solar telescope for such measurements, which is a serious limitation on characterizing a site's seeing conditions in terms of the seeing profile. Based on our previous work, we propose a compact solar seeing profiler called the Advanced Multiple Aperture Seeing Profiler (A-MASP). A-MASP consists of two small telescopes, each with a 100 mm aperture. The two small telescopes can be installed on a commercial computerized tripod to track solar granule structures for seeing profile measurement. A-MASP is extremely simple and portable, which makes it an ideal system to bring to a potential new site for seeing profile measurements.

  20. Optical synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Asaf; Zach, Shlomo; Zalevsky, Zeev

    2013-06-01

A method is proposed for increasing the resolution of an object and overcoming the diffraction limit of an optical system installed on top of a moving imaging system, such as an airborne platform or satellite. The resolution improvement is obtained via a two-step process. First, three low-resolution, differently defocused images are captured and the optical phase is retrieved using an improved iterative Gerchberg-Saxton based algorithm. The phase retrieval allows numerical back propagation of the field to the aperture plane. Second, the imaging system is shifted and the first step is repeated. The obtained optical fields at the aperture plane are combined, and a synthetically increased lens aperture is generated along the direction of movement, yielding higher imaging resolution. The method resembles a well-known approach from the microwave regime called synthetic aperture radar, in which the antenna size is synthetically increased along the platform propagation direction. The proposed method is demonstrated via MATLAB simulation as well as through a laboratory experiment.
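The phase-retrieval step can be illustrated with the classic two-plane Gerchberg-Saxton iteration (the paper uses an improved variant with three defocused images; this is only the textbook form):

```python
import numpy as np

def gerchberg_saxton(amp_obj, amp_far, iters=200):
    """Textbook Gerchberg-Saxton iteration: recover the phase linking two
    measured amplitude planes related by a Fourier transform, by
    alternately imposing each amplitude constraint."""
    field = amp_obj.astype(complex)                    # start with zero phase
    for _ in range(iters):
        far = np.fft.fft2(field)
        far = amp_far * np.exp(1j * np.angle(far))     # impose far-field amplitude
        field = np.fft.ifft2(far)
        field = amp_obj * np.exp(1j * np.angle(field)) # impose object amplitude
    return np.angle(field)

# Synthetic ground truth: unit amplitude with a random phase screen; only
# the two amplitudes are "measured", the phase must be recovered.
rng = np.random.default_rng(2)
amp = np.ones((16, 16))
phase_true = 0.3 * rng.standard_normal((16, 16))
far_amp = np.abs(np.fft.fft2(amp * np.exp(1j * phase_true)))
phase_rec = gerchberg_saxton(amp, far_amp)
```

The iteration's Fourier-modulus error is non-increasing, so the recovered phase reproduces the measured far-field amplitude far better than the zero-phase starting guess, though phase retrieval is only determined up to trivial ambiguities such as a global phase offset.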

  1. Apodizer aperture for lasers

    DOEpatents

    Jorna, Siebe; Siebert, Larry D.; Brueckner, Keith A.

    1976-11-09

An aperture attenuator for use with high power lasers which includes glass windows shaped and assembled to form an annulus chamber which is filled with a dye solution. The annulus chamber is shaped such that the section in alignment with the axis of the incident beam follows a curve which is represented by the equation y = (r - r_o)^n.

  2. Synthetic Aperture Radar Interferometry

    NASA Technical Reports Server (NTRS)

    Rosen, P. A.; Hensley, S.; Joughin, I. R.; Li, F.; Madsen, S. N.; Rodriguez, E.; Goldstein, R. M.

    1998-01-01

    Synthetic aperture radar interferometry is an imaging technique for measuring the topography of a surface, its changes over time, and other changes in the detailed characteristics of the surface. This paper reviews the techniques of interferometry, systems and limitations, and applications in a rapidly growing area of science and engineering.

  3. Fast subpel motion estimation for H.264/advanced video coding with an adaptive motion vector accuracy decision

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Jung, Bongsoo; Jung, Jooyoung; Jeon, Byeungwoo

    2012-11-01

    The quarter-pel motion vector accuracy supported by H.264/advanced video coding (AVC) in motion estimation (ME) and compensation (MC) provides high compression efficiency. However, it also increases the computational complexity. While various well-known fast integer-pel ME methods are already available, lack of a good, fast subpel ME method results in problems associated with relatively high computational complexity. This paper presents one way of solving the complexity problem of subpel ME by making adaptive motion vector (MV) accuracy decisions in inter-mode selection. The proposed MV accuracy decision is made using inter-mode selection of a macroblock with two decision criteria. Pixels are classified as stationary (and/or homogeneous) or nonstationary (and/or nonhomogeneous). In order to avoid unnecessary interpolation and processing, a proper subpel ME level is chosen among four different combinations, each of which has a different MV accuracy and number of subpel ME iterations based on the classification. Simulation results using an open source x264 software encoder show that without any noticeable degradation (by -0.07 dB on average), the proposed method reduces total encoding time and subpel ME time, respectively, by 51.78% and by 76.49% on average, as compared to the conventional full-pel pixel search.
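The adaptive accuracy decision can be sketched as a classifier that inspects a macroblock's temporal difference and texture before choosing how far to refine the motion vector. The thresholds and the three-level mapping below are illustrative, not the paper's actual criteria:

```python
import numpy as np

def choose_subpel_level(block, prev_block, sad_thresh=2.0, var_thresh=25.0):
    """Pick the motion-vector accuracy for one 16x16 macroblock.

    Stationary (small temporal SAD) and homogeneous (low variance)
    blocks gain little from subpel refinement, so interpolation and
    extra search iterations can be skipped for them. Thresholds are
    illustrative only."""
    sad = np.abs(block.astype(float) - prev_block.astype(float)).mean()
    var = block.astype(float).var()
    if sad < sad_thresh and var < var_thresh:
        return "full-pel"       # stationary and homogeneous: skip subpel ME
    if sad < sad_thresh or var < var_thresh:
        return "half-pel"       # partially static: one refinement level
    return "quarter-pel"        # complex motion and texture: full refinement

flat = np.zeros((16, 16), dtype=np.uint8)
level_static = choose_subpel_level(flat, flat)     # static, flat block

rng = np.random.default_rng(1)
busy = rng.integers(0, 255, (16, 16))
prev = rng.integers(0, 255, (16, 16))
level_busy = choose_subpel_level(busy, prev)       # noisy, moving block
```

Skipping interpolation for blocks classified as "full-pel" is where the encoding-time savings come from, at the cost of an occasional misclassification that slightly lowers compression efficiency.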

  4. Vision: Efficient Adaptive Coding

    PubMed Central

    Burr, David; Cicchini, Guido Marco

    2016-01-01

    Recent studies show that perception is driven not only by the stimuli currently impinging on our senses, but also by the immediate past history. The influence of recent perceptual history on the present reflects the action of efficient mechanisms that exploit temporal redundancies in natural scenes. PMID:25458222

  5. Adapt

    NASA Astrophysics Data System (ADS)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored using the Common Data Format (CDF) and served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. The CDAWEB and SPDF data repositories were then queried on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted
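A minimal sketch of the kind of Granule description ADAPT emits might look as follows; the element names loosely follow the SPASE data model, and the identifiers and URL are invented for illustration, not taken from the actual CDAWEB/SPDF holdings.

```python
import xml.etree.ElementTree as ET

def make_granule(resource_id, parent_id, url):
    """Build a minimal SPASE-style Granule that links one data file to its
    parent high-level data resource description (element set greatly
    simplified relative to the real SPASE schema)."""
    spase = ET.Element("Spase")
    granule = ET.SubElement(spase, "Granule")
    ET.SubElement(granule, "ResourceID").text = resource_id
    ET.SubElement(granule, "ParentID").text = parent_id
    source = ET.SubElement(granule, "Source")
    ET.SubElement(source, "URL").text = url
    return ET.tostring(spase, encoding="unicode")

# Hypothetical identifiers purely for illustration.
doc = make_granule("spase://Example/Granule/AC_H0_MFI/20150101",
                   "spase://Example/NumericalData/AC_H0_MFI",
                   "https://example.org/data/ac_h0_mfi_20150101.cdf")
```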

  6. Aperture center energy showcase

    SciTech Connect

    Torres, J. J.

    2012-03-01

    Sandia and Forest City have established a Cooperative Research and Development Agreement (CRADA), and the partnership provides a unique opportunity to take technology research and development from demonstration to application in a sustainable community. A project under that CRADA, the Aperture Center Energy Showcase, offers a means to develop exhibits and demonstrations that present feedback to community members, Sandia customers, and visitors. The technologies included in the showcase focus on renewable energy, energy efficiency, and resilience. These technologies are generally scalable and provide secure, efficient solutions to energy production, delivery, and usage. In addition to establishing an Energy Showcase, support offices and conference capabilities that facilitate research, collaboration, and demonstration were created. The Aperture Center project focuses on establishing a location that provides outreach, awareness, and demonstration of research findings, emerging technologies, and project developments to Sandia customers, visitors, and Mesa del Sol community members.

  7. Configurable Aperture Space Telescope

    NASA Technical Reports Server (NTRS)

    Ennico, Kimberly; Bendek, Eduardo

    2015-01-01

    In December 2014, we were awarded Center Innovation Fund support to evaluate an optical and mechanical concept for a novel implementation of a segmented telescope based on modular, interconnected small sats (satlets). The concept is called CAST, a Configurable Aperture Space Telescope. With a current TRL of 2, we aim to reach TRL 3 in September 2015 by demonstrating a 2x2 mirror system to validate our optical model and error budget, providing a straw-man mechanical architecture and structural damping analyses, and deriving future satlet-based observatory performance requirements. CAST provides alternative access to a visible- and/or UV-wavelength space telescope with a 1-meter or larger aperture for the NASA SMD Astrophysics and Planetary Science community after the retirement of HST.

  8. Integrated electrochromic aperture diaphragm

    NASA Astrophysics Data System (ADS)

    Deutschmann, T.; Oesterschulze, E.

    2014-05-01

    In recent years, the rapid spread of handheld electronics with integrated cameras has opened exciting opportunities for small, high-performing optical systems. For this purpose miniaturized iris apertures are of practical importance because they are essential to control both the dynamic range of the imaging system and the depth of focus. Therefore, we invented a micro-optical iris based on an electrochromic (EC) material, which changes its absorption in response to an applied voltage. A coaxial arrangement of annular rings of the EC material is used to establish an iris aperture without any mechanically moving parts. The advantages of this device arise not only from the space-saving design, with a device-layer thickness of 50 μm, but also from its low power consumption. In fact, its transmission state is stable in an open circuit, termed the memory effect; only changes of the absorption require a voltage of up to 2 V. In contrast to mechanical iris apertures, the absorption may be controlled on an analog scale, offering the opportunity for apodization. These properties make our device an ideal candidate for battery-powered and space-saving systems. We present optical measurements concerning control of the transmitted intensity and depth of focus, and studies dealing with switching times, light scattering, and stability. While the EC polymer used in this study still has limitations concerning color and contrast, the presented device features all functions of an iris aperture. In contrast to conventional devices it offers some special features: owing to the variable chemistry of the EC material, its spectral response may be adjusted to specific applications such as color filtering in different spectral regimes (UV, visible, infrared). Furthermore, all segments may be switched individually to realize functions like spatial Fourier filtering or laterally tunable intensity filters.
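Because the rings switch between transmitting and absorbing states, the effective transmission of such an iris is simply an area-weighted sum over the annular segments. A hedged geometric sketch (the ring radii and transmittance states below are invented for illustration):

```python
import math

def iris_transmission(outer_radii, transmittances):
    """Open-area fraction of a concentric-ring EC iris.
    outer_radii: outer radius of each annulus, innermost first;
    transmittances: per-segment transmittance in [0, 1]."""
    total_area = math.pi * outer_radii[-1] ** 2
    open_area, r_inner = 0.0, 0.0
    for r_outer, t in zip(outer_radii, transmittances):
        # area of this annulus, weighted by its current transmittance
        open_area += t * math.pi * (r_outer ** 2 - r_inner ** 2)
        r_inner = r_outer
    return open_area / total_area
```

With only a central disk of radius 1 transparent inside a darkened ring of outer radius 2, a quarter of the full aperture area transmits; intermediate grayscale states of the outer ring give the analog apodization the abstract mentions.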

  9. Aperture excited dielectric antennas

    NASA Technical Reports Server (NTRS)

    Crosswell, W. F.; Chatterjee, J. S.; Mason, V. B.; Tai, C. T.

    1974-01-01

    The results of a comprehensive experimental and theoretical study of the effect of placing dielectric objects over the aperture of waveguide antennas are presented. Experimental measurements of the radiation patterns, gain, impedance, near-field amplitude, and pattern and impedance coupling between pairs of antennas are given for various Plexiglas shapes, including the sphere and the cube, excited by rectangular, circular, and square waveguide feed apertures. The waveguide excitation of a dielectric sphere is modeled using the Huygens' source, and expressions for the resulting electric fields, directivity, and efficiency are derived. Calculations using this model show good overall agreement with experimental patterns and directivity measurements. The waveguide under an infinite dielectric slab is used as an impedance model. Calculations using this model agree qualitatively with the measured impedance data. It is concluded that dielectric loaded antennas such as the waveguide excited sphere, cube, or sphere-cylinder can produce directivities in excess of that obtained by a uniformly illuminated aperture of the same cross section, particularly for dielectric objects with dimensions of 2 wavelengths or less. It is also shown that for certain configurations coupling between two antennas of this type is less than that for the same antennas without dielectric loading.

  10. Novel large aperture EBCCD

    NASA Astrophysics Data System (ADS)

    Suzuki, Atsumu; Aoki, Shigeki; Haba, Junji; Sakuda, Makoto; Suyama, Motohiro

    2011-02-01

    A novel large aperture electron bombardment charge coupled device (EBCCD) has been developed. The diameter of its photocathode is 10 cm, and it is the first EBCCD with such a large aperture. Its gain shows good linearity as a function of applied voltage up to -12 kV, where the gain is 2400. The spatial resolution was measured using ladder pattern charts. It is better than 2 line pairs/mm, which corresponds to 3.5 times the CCD pixel size. The spatial resolution was also measured with a copper foil pattern on a fluorescent screen irradiated with X-rays (14 and 18 keV) and a 60 keV gamma-ray from an americium source. The result was consistent with the measurement using ladder pattern charts. The output signal as a function of input light intensity shows better linearity than that of image intensifier tubes (IIT), as expected. We could detect cosmic rays passing through a scintillating fiber block and a plastic scintillator as a demonstration of practical use in particle physics experiments. This kind of large aperture EBCCD can, for example, be used as an image sensor for a detector with a large number of readout channels and is expected to find additional applications in other physics experiments.

  11. Terahertz interferometric synthetic aperture tomography for confocal imaging systems.

    PubMed

    Heimbeck, M S; Marks, D L; Brady, D; Everitt, H O

    2012-04-15

    Terahertz (THz) interferometric synthetic aperture tomography (TISAT) for confocal imaging within extended objects is demonstrated by combining attributes of synthetic aperture radar and optical coherence tomography. Algorithms recently devised for interferometric synthetic aperture microscopy are adapted to account for the diffraction- and defocusing-induced spatially varying THz beam width characteristic of narrow depth of focus, high-resolution confocal imaging. A frequency-swept two-dimensional TISAT confocal imaging instrument rapidly achieves in-focus, diffraction-limited resolution over a depth 12 times larger than the instrument's depth of focus in a manner that may be easily extended to three dimensions and greater depths.

  12. An overview of the activities of the OECD/NEA Task Force on adapting computer codes in nuclear applications to parallel architectures

    SciTech Connect

    Kirk, B.L.; Sartori, E.

    1997-06-01

    Subsequent to the introduction of High Performance Computing in the developed countries, the Organization for Economic Cooperation and Development/Nuclear Energy Agency (OECD/NEA) created the Task Force on Adapting Computer Codes in Nuclear Applications to Parallel Architectures (under the guidance of the Nuclear Science Committee's Working Party on Advanced Computing) to study the growth area in supercomputing and its applicability to the nuclear community's computer codes. The result has been four years of investigation for the Task Force in different subject fields - deterministic and Monte Carlo radiation transport, computational mechanics and fluid dynamics, nuclear safety, atmospheric models and waste management.

  13. Dynamic metamaterial aperture for microwave imaging

    SciTech Connect

    Sleasman, Timothy; Imani, Mohammadreza F.; Gollub, Jonah N.; Smith, David R.

    2015-11-16

    We present a dynamic metamaterial aperture for use in computational imaging schemes at microwave frequencies. The aperture consists of an array of complementary, resonant metamaterial elements patterned into the upper conductor of a microstrip line. Each metamaterial element contains two diodes connected to an external control circuit such that the resonance of the metamaterial element can be damped by application of a bias voltage. Through applying different voltages to the control circuit, select subsets of the elements can be switched on to create unique radiation patterns that illuminate the scene. Spatial information of an imaging domain can thus be encoded onto this set of radiation patterns, or measurements, which can be processed to reconstruct the targets in the scene using compressive sensing algorithms. We discuss the design and operation of a metamaterial imaging system and demonstrate reconstructed images with a 10:1 compression ratio. Dynamic metamaterial apertures can potentially be of benefit in microwave or millimeter wave systems such as those used in security screening and through-wall imaging. In addition, feature-specific or adaptive imaging can be facilitated through the use of the dynamic aperture.
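The measurement-and-reconstruction chain described above can be sketched abstractly: each switching state of the aperture contributes one row of a sensing matrix, and a sparse solver inverts the compressed measurements. Everything below (random rows standing in for the actual radiation patterns, ISTA as the recovery algorithm, all sizes and constants) is an illustrative assumption, apart from the 10:1 compression ratio quoted in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_meas = 100, 10                  # 10:1 compression ratio

# One row per aperture switching state; random entries stand in for the
# actual metamaterial radiation patterns projected onto the scene grid.
H = rng.standard_normal((n_meas, n_pixels))

scene = np.zeros(n_pixels)
scene[[12, 47, 80]] = 1.0                   # sparse scene: three point targets

y = H @ scene                               # encoded (compressed) measurements

# Minimal sparse recovery via iterative soft thresholding (ISTA).
x = np.zeros(n_pixels)
step = 1.0 / np.linalg.norm(H, 2) ** 2      # step size ensuring convergence
for _ in range(500):
    x = x + step * (H.T @ (y - H @ x))      # gradient step on the residual
    x = np.sign(x) * np.maximum(np.abs(x) - step * 0.1, 0.0)  # shrinkage
```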

  14. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small spot size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system with a development of equations for limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
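The final step mentioned above, obtaining the MTF from the line spread function, is the normalized magnitude of the LSF's Fourier transform. A small sketch with an assumed Gaussian LSF (the width is illustrative, not from the simulation):

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """MTF as the magnitude of the Fourier transform of the line spread
    function, normalized to unity at zero spatial frequency."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                       # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=dx)     # cycles per unit length
    return freqs, mtf / mtf[0]

# Assumed Gaussian LSF purely for illustration (sigma = 0.05 length units).
x = np.linspace(-1.0, 1.0, 256)
freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.05**2)), dx=x[1] - x[0])
```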

  15. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.

  16. Evaluation of damage-induced permeability using a three-dimensional Adaptive Continuum/Discontinuum Code (AC/DC)

    NASA Astrophysics Data System (ADS)

    Fabian, Dedecker; Peter, Cundall; Daniel, Billaux; Torsten, Groeger

    Digging a shaft or drift inside a rock mass is common practice in civil engineering when an underground structure, such as a motorway or railway tunnel or a storage shaft, is to be built. In most cases, the consequences of the disturbance on the medium must be known in order to estimate the behaviour of the disturbed rock mass. Indeed, excavating part of the rock causes a new distribution of the stress field around the excavation that can lead to micro-cracking and even to the failure of some rock volume in the vicinity of the shaft. Consequently, the formed micro-cracks modify the mechanical and hydraulic properties of the rock. In this paper, we present an original method for the evaluation of damage-induced permeability. ITASCA has developed and used discontinuum models to study rock damage by building particle assemblies and checking the breakage of bonds under stress. However, such models are limited in size by the very large number of particles needed to model even a comparatively small volume of rock. In fact, a large part of most models never experiences large strains and does not require the accurate description of large-strain/damage/post-peak behaviour afforded by a discontinuum model. Thus, a large model can frequently be separated into a strongly strained “core” area to be represented by a discontinuum and a peripheral area for which continuum zones would be adequate. Based on this observation, Itasca has developed a coupled, three-dimensional, continuum/discontinuum modelling approach. The new approach, termed Adaptive Continuum/Discontinuum Code (AC/DC), is based on the use of a periodic discontinuum “base brick” for which more or less simplified continuum equivalents are derived. Depending on the level of deformation in each part of the model, the AC/DC code can dynamically select the appropriate brick type to be used. In this paper, we apply the new approach to an excavation performed in the Bure site, at which the French nuclear waste agency
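The dynamic brick selection the abstract describes can be caricatured in a few lines; the strain threshold and labels below are invented stand-ins for the actual AC/DC switching criterion.

```python
def select_brick_types(zone_strains, strain_threshold=1e-3):
    """Assign each model zone either the discontinuum 'base brick' or a
    cheaper continuum equivalent, based on its current strain level.
    The threshold is an illustrative stand-in for the code's criterion."""
    return ["discontinuum" if s > strain_threshold else "continuum"
            for s in zone_strains]
```

In an excavation model, only the strongly strained zones near the opening would be promoted to the expensive discontinuum representation, which is what keeps the overall model tractable.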

  17. Differential Optical Synthetic Aperture Radar

    DOEpatents

    Stappaerts, Eddy A.

    2005-04-12

    A new differential technique for forming optical images using a synthetic aperture is introduced. This differential technique utilizes a single aperture to obtain unique (N) phases that can be processed to produce a synthetic aperture image at points along a trajectory. This is accomplished by dividing the aperture into two equal "subapertures", each having a width that is less than the actual aperture, along the direction of flight. As the platform flies along a given trajectory, a source illuminates objects and the two subapertures are configured to collect return signals. The technique of the invention is designed to cancel common-mode errors, trajectory deviations from a straight line, and laser phase noise to provide the set of resultant (N) phases that can produce an image having a spatial resolution corresponding to a synthetic aperture.

  18. Memory-efficient table look-up optimized algorithm for context-based adaptive variable length decoding in H.264/advanced video coding

    NASA Astrophysics Data System (ADS)

    Wang, Jianhua; Cheng, Lianglun; Wang, Tao; Peng, Xiaodong

    2016-03-01

    Table look-up operation plays a very important role during the decoding processing of context-based adaptive variable length decoding (CAVLD) in H.264/advanced video coding (AVC). However, frequent table look-up operations can result in large table memory access and thus lead to high table power consumption. Aiming to solve the problem of large table memory access in current methods, and thereby reduce power consumption, a memory-efficient table look-up optimized algorithm is presented for CAVLD. The contribution of this paper lies in introducing index search technology to reduce the large memory access of table look-up, and thus its high power consumption. Specifically, in our scheme, we use index search technology to reduce memory access by reducing the searching and matching operations for code_word, on the basis of the internal relationship among the length of the zero run in code_prefix, the value of code_suffix, and code_length, thus saving the power consumption of table look-up. The experimental results show that our proposed table look-up algorithm based on index search can lower memory access consumption by about 60% compared with a sequential-search table look-up scheme, and thus save much power consumption for CAVLD in H.264/AVC.
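The contrast between sequential matching and index-based look-up can be sketched as follows; the table entries are invented toy codes, not the actual H.264 CAVLC tables.

```python
# (code_prefix_zeros, code_suffix, decoded_symbol) -- toy entries only.
CODE_TABLE = [
    (0, 0b1, "A"),
    (1, 0b0, "B"),
    (1, 0b1, "C"),
    (2, 0b0, "D"),
]

def sequential_lookup(prefix_zeros, suffix):
    """Baseline: scan the table row by row (one memory access per row)."""
    for pz, sfx, symbol in CODE_TABLE:
        if pz == prefix_zeros and sfx == suffix:
            return symbol
    return None

# One-time index keyed on (prefix-zero run length, suffix value), so each
# decode needs a single access instead of a search-and-match loop.
INDEX = {(pz, sfx): symbol for pz, sfx, symbol in CODE_TABLE}

def indexed_lookup(prefix_zeros, suffix):
    return INDEX.get((prefix_zeros, suffix))
```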

  19. Compounding in synthetic aperture imaging.

    PubMed

    Hansen, Jens Munk; Jensen, Jørgen Arendt

    2012-09-01

    A method for obtaining compound images using synthetic aperture data is investigated using a convex array transducer. The new approach allows spatial compounding to be performed for any number of angles without reducing the frame rate or temporal resolution. This important feature is an intrinsic property of how the compound images are constructed using synthetic aperture data and an improvement compared with how spatial compounding is obtained using conventional methods. The synthetic aperture compound images are created by exploiting the linearity of delay-and-sum beamformation for data collected from multiple spherical emissions to synthesize multiple transmit and receive apertures, corresponding to imaging the tissue from multiple directions. The many images are added incoherently, to produce a single compound image. Using a 192-element, 3.5-MHz, λ-pitch transducer, it is demonstrated from tissue-phantom measurements that the speckle is reduced and the contrast resolution improved when applying synthetic aperture compound imaging. At a depth of 4 cm, the size of the synthesized apertures is optimized for lesion detection based on the speckle information density. This is a performance measure for tissue contrast resolution which quantifies the tradeoff between resolution loss and speckle reduction. The speckle information density is improved by 25% when comparing synthetic aperture compounding to a similar setup for compounding using dynamic receive focusing. The cystic resolution and clutter levels are measured using a wire phantom setup and compared with conventional application of the array, as well as to synthetic aperture imaging without compounding. If the full aperture is used for synthetic aperture compounding, the cystic resolution is improved by 41% compared with conventional imaging, and is at least as good as what can be obtained using synthetic aperture imaging without compounding. PMID:23007781
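The core operation above, incoherent addition of envelope images beamformed from different look directions, can be sketched with synthetic speckle; fully developed speckle is simulated here with complex Gaussian noise, and the variance reduction with the number of independent looks is the textbook expectation, not a result from the paper.

```python
import numpy as np

def compound_incoherent(images):
    """Average the envelope (magnitude) of beamformed images obtained
    from different look directions to suppress speckle."""
    return np.mean([np.abs(img) for img in images], axis=0)

rng = np.random.default_rng(0)
looks = [rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
         for _ in range(8)]            # 8 independent speckle realizations

single = np.abs(looks[0])              # one look: full speckle contrast
compounded = compound_incoherent(looks)
```

The relative standard deviation (speckle contrast) of the compounded image is markedly lower than that of a single look, which is the mechanism behind the improved contrast resolution reported above.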

  20. Reading the Second Code: Mapping Epigenomes to Understand Plant Growth, Development, and Adaptation to the Environment

    PubMed Central

    2012-01-01

    We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210

  1. Performance Analysis of MIMO-STBC Systems with Higher Coding Rate Using Adaptive Semiblind Channel Estimation Scheme

    PubMed Central

    Kumar, Ravi

    2014-01-01

    The semiblind channel estimation method provides the best trade-off in terms of bandwidth overhead, computational complexity, and latency. Multiple input multiple output (MIMO) systems yield higher data rates and longer transmit range without any requirement for additional bandwidth or transmit power. This paper presents a detailed analysis of diversity coding techniques using MIMO antenna systems. Different space-time block code (STBC) schemes have been explored and analyzed with the proposed higher code rate. STBCs with higher code rates have been simulated for different modulation schemes in the MATLAB environment, and the simulated results, compared in the semiblind setting, show improvement even for highly correlated antenna arrays and come very close to the case in which channel state information (CSI) is perfectly known. PMID:24688379

  2. Passive adaptive imaging through turbulence

    NASA Astrophysics Data System (ADS)

    Tofsted, David

    2016-05-01

    Standard methods for improved imaging system performance under degrading optical turbulence conditions typically involve active adaptive techniques or post-capture image processing. Here, passive adaptive methods are considered where active sources are disallowed, a priori. Theoretical analyses of short-exposure turbulence impacts indicate that varying aperture sizes experience different degrees of turbulence impacts. Smaller apertures often outperform larger aperture systems as turbulence strength increases. This suggests a controllable aperture system is advantageous. In addition, sub-aperture sampling of a set of training images permits the system to sense tilts in different sub-aperture regions through image acquisition and image cross-correlation calculations. A four sub-aperture pattern supports corrections involving five realizable operating modes (beyond tip and tilt) for removing aberrations over an annular pattern. Progress to date will be discussed regarding development and field trials of a prototype system.
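The tilt-sensing step above, estimating local shifts between sub-aperture images by cross-correlation, can be sketched as follows; the FFT-based correlation and integer-pixel peak search are a minimal stand-in for the prototype's processing, and the test image is synthetic.

```python
import numpy as np

def estimate_tilt_shift(ref, img):
    """Estimate the integer-pixel shift of img relative to ref from the
    peak of their FFT-based cross-correlation; in a sub-aperture system
    this shift is proportional to the local wavefront tilt."""
    xcorr = np.fft.ifft2(np.fft.fft2(img - img.mean())
                         * np.conj(np.fft.fft2(ref - ref.mean()))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # wrap circular-correlation indices to signed shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, xcorr.shape))

rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64))
shift = estimate_tilt_shift(ref, np.roll(ref, (3, -2), axis=(0, 1)))
```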

  3. Optical aperture synthesis

    NASA Astrophysics Data System (ADS)

    van der Avoort, Casper

    2006-05-01

    Optical long-baseline stellar interferometry is an observational technique in astronomy that has existed for over a century, but has truly bloomed during the last decades. The undoubted value of stellar interferometry as a technique to measure stellar parameters beyond the classical resolution limit is more and more spreading to the regime of synthesis imaging. With optical aperture synthesis imaging, the measurement of parameters is extended to the reconstruction of high-resolution stellar images. A number of optical telescope arrays for synthesis imaging are operational on Earth, while space-based telescope arrays are being designed. For all imaging arrays, the combination of the light collected by the telescopes in the array can be performed in a number of ways. In this thesis, methods are introduced to model these beam combination schemes and compare their effectiveness in generating data for reconstructing the image of a stellar object. One of these methods of beam combination is to be applied in a future space telescope. The European Space Agency is developing a mission that can valuably be extended with an imaging beam combiner. This mission is labeled Darwin, as its main goal is to provide information on the origin of life. The primary objective is the detection of planets around nearby stars (called exoplanets), and more precisely, Earth-like exoplanets. This detection is based on a signal rather than an image. With an imaging mode, designed as described in this thesis, Darwin can make images of, for example, the planetary system to which a detected exoplanet belongs or, as another example, of the dust disk around a star out of which planets form. Such images will greatly contribute to the understanding of the formation of our own planetary system and of how and when life became possible on Earth. The comparison of beam combination methods for interferometric imaging occupies most of the pages of this thesis. Additional chapters will

  4. HASEonGPU-An adaptive, load-balanced MPI/GPU-code for calculating the amplified spontaneous emission in high power laser media

    NASA Astrophysics Data System (ADS)

    Eckert, C. H. J.; Zenker, E.; Bussmann, M.; Albach, D.

    2016-10-01

    We present an adaptive Monte Carlo algorithm for computing the amplified spontaneous emission (ASE) flux in laser gain media pumped by pulsed lasers. With the design of high power lasers in mind, which require large size gain media, we have developed the open source code HASEonGPU that is capable of utilizing multiple graphic processing units (GPUs). With HASEonGPU, time to solution is reduced to minutes on a medium size GPU cluster of 64 NVIDIA Tesla K20m GPUs and excellent speedup is achieved when scaling to multiple GPUs. Comparison of simulation results to measurements of ASE in Yb3+:YAG ceramics shows perfect agreement.

  5. Performance of a Block Structured, Hierarchical Adaptive Mesh Refinement Code on the 64k Node IBM BlueGene/L Computer

    SciTech Connect

    Greenough, Jeffrey A.; de Supinski, Bronis R.; Yates, Robert K.; Rendleman, Charles A.; Skinner, David; Beckner, Vince; Lijewski, Mike; Bell, John; Sexton, James C.

    2005-04-25

    We describe the performance of the block-structured Adaptive Mesh Refinement (AMR) code Raptor on the 32k node IBM BlueGene/L computer. This machine represents a significant step forward towards petascale computing. As such, it presents Raptor with many challenges for utilizing the hardware efficiently. In terms of performance, Raptor shows excellent weak and strong scaling when running in single level mode (no adaptivity). Hardware performance monitors show Raptor achieves an aggregate performance of 3.0 Tflops in the main integration kernel on the 32k system. Results from preliminary AMR runs on a prototype astrophysical problem demonstrate the efficiency of the current software when running at large scale. The BG/L system is enabling a physics problem to be considered that represents a factor of 64 increase in overall size compared to the largest ones of this type computed to date. Finally, we provide a description of the development work currently underway to address our inefficiencies.

  6. Sparse aperture endoscope

    DOEpatents

    Fitch, Joseph P.

    1999-07-06

    An endoscope which reduces the volume needed by the imaging part thereof, maintains resolution of a wide diameter optical system, while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics such allows a larger fraction of the volume to be used for non-imaging tools, which allows smaller incisions in surgical and diagnostic medical applications thus produces less trauma to the patient or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, and thus increases the utility thereof. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing.

  7. Sparse aperture endoscope

    DOEpatents

    Fitch, J.P.

    1999-07-06

    An endoscope is disclosed which reduces the volume needed by the imaging part, maintains resolution of a wide diameter optical system, while increasing tool access, and allows stereographic or interferometric processing for depth and perspective information/visualization. Because the endoscope decreases the volume consumed by imaging optics such allows a larger fraction of the volume to be used for non-imaging tools, which allows smaller incisions in surgical and diagnostic medical applications thus produces less trauma to the patient or allows access to smaller volumes than is possible with larger instruments. The endoscope utilizes fiber optic light pipes in an outer layer for illumination, a multi-pupil imaging system in an inner annulus, and an access channel for other tools in the center. The endoscope is amenable to implementation as a flexible scope, and thus increases its utility. Because the endoscope uses a multi-aperture pupil, it can also be utilized as an optical array, allowing stereographic and interferometric processing. 7 figs.

  8. DAVINCI: Dilute Aperture VIsible Nulling Coronagraphic Imager

    NASA Technical Reports Server (NTRS)

    Shao, Michael; Levine, B. M.; Vasisht, G.; Lane, B. F.; Woodruff, R.; Vasudevan, G.; Samuele, R.; Lloyd, C. A.; Clampin, M.; Lyon, R.; Guyon, O.

    2008-01-01

    This slide presentation gives an overview of DAVINCI (Dilute Aperture VIsible Nulling Coronagraphic Imager). The presentation also includes information about the dilute aperture coronagraph and Lyot efficiency.

  9. UAVSAR Phased Array Aperture

    NASA Technical Reports Server (NTRS)

    Chamberlain, Neil; Zawadzki, Mark; Sadowy, Greg; Oakes, Eric; Brown, Kyle; Hodges, Richard

    2009-01-01

    This paper describes the development of a patch antenna array for an L-band repeat-pass interferometric synthetic aperture radar (InSAR) instrument that is to be flown on an unmanned aerial vehicle (UAV). The antenna operates at a center frequency of 1.2575 GHz and with a bandwidth of 80 MHz, consistent with a number of radar instruments that JPL has previously flown. The antenna is designed to radiate orthogonal linear polarizations in order to facilitate fully-polarimetric measurements. Beam-pointing requirements for repeat-pass SAR interferometry necessitate electronic scanning in azimuth over a range of ±20 degrees in order to compensate for aircraft yaw. Beam-steering is accomplished by transmit/receive (T/R) modules and a beamforming network implemented in a stripline circuit board. This paper, while providing an overview of the phased array architecture, focuses on the electromagnetic design of the antenna tiles and associated interconnects. An important aspect of the design of this antenna is that it has an amplitude taper of 10 dB in the elevation direction. This is to reduce multipath reflections from the wing that would otherwise be detrimental to interferometric radar measurements. This taper is provided by coupling networks in the interconnect circuits as opposed to attenuating the output of the T/R modules. Details are given of material choices and fabrication techniques that meet the demanding environmental conditions that the antenna must operate in. Predicted array performance is reported in terms of co-polarized and cross-polarized far-field antenna patterns, and also in terms of active reflection coefficient.
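
    The azimuth beam-steering the abstract describes amounts to applying a linear phase gradient across the array elements. The sketch below computes those per-element phases at the stated 1.2575 GHz center frequency and verifies that the array-factor peak lands at the steer angle; the half-wavelength spacing and 8-element count are assumptions for illustration, not values from the paper.

```python
import numpy as np

f = 1.2575e9                  # center frequency from the abstract (Hz)
c = 3.0e8                     # speed of light (m/s)
lam = c / f
d = lam / 2                   # assumed half-wavelength element spacing
n_elem = 8                    # assumed number of azimuth elements
theta_s = np.deg2rad(-20.0)   # steer angle at the edge of the yaw-compensation range

n = np.arange(n_elem)
# Per-element phases the T/R modules would apply to steer the beam
phase = -2 * np.pi * d * n * np.sin(theta_s) / lam

def array_factor(theta):
    return abs(np.sum(np.exp(1j * (2 * np.pi * d * n * np.sin(theta) / lam + phase))))

thetas = np.deg2rad(np.linspace(-90.0, 90.0, 721))
peak = np.rad2deg(thetas[np.argmax([array_factor(t) for t in thetas])])
print(round(peak, 2))   # beam peak lands at the steer angle, -20.0 degrees
```

    With half-wavelength spacing there are no grating lobes, so the single array-factor maximum tracks the applied phase gradient.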

  10. Verification of the CENTRM Module for Adaptation of the SCALE Code to NGNP Prismatic and PBR Core Designs

    SciTech Connect

    Ganapol, Barry; Maldonado, Ivan

    2014-01-23

    The generation of multigroup cross sections lies at the heart of the very high temperature reactor (VHTR) core design, whether of the prismatic (block) or pebble-bed type. The design process, generally performed in three steps, is quite involved, and its execution is crucial to proper reactor physics analyses. The primary purpose of this project is to develop the CENTRM cross-section processing module of the SCALE code package for application to prismatic or pebble-bed core designs. The team will include in a final report a detailed outline of the entire processing procedure for applying CENTRM, complete with a demonstration. In addition, they will conduct a thorough verification of the CENTRM code, which has yet to be performed. The tasks for this project are to: Thoroughly test the panel algorithm for neutron slowing down; Develop the panel algorithm for multi-materials; Establish a multigroup convergence 1D transport acceleration algorithm in the panel formalism; Verify CENTRM in 1D plane geometry; Create and test the corresponding transport/panel algorithm in spherical and cylindrical geometries; and, Apply the verified CENTRM code to current VHTR core design configurations for an infinite lattice, including assessing effectiveness of Dancoff corrections to simulate TRISO particle heterogeneity.

  11. Superresolution and Synthetic Aperture Radar

    SciTech Connect

    DICKEY,FRED M.; ROMERO,LOUIS; DOERRY,ARMIN W.

    2001-05-01

    Superresolution concepts offer the potential of resolution beyond the classical limit. This great promise has not generally been realized. In this study we investigate the potential application of superresolution concepts to synthetic aperture radar. The analytical basis for superresolution theory is discussed. The application of the concept to synthetic aperture radar is investigated as an operator inversion problem. Generally, the operator inversion problem is ill posed. A criterion for judging superresolution processing of an image is presented.
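
    The abstract frames superresolution as an ill-posed operator-inversion problem. A minimal numerical sketch of that point, under assumed values (a Gaussian blur matrix stands in for the band-limiting imaging operator; it is not a SAR model from the paper): naive inversion amplifies measurement noise, while Tikhonov regularization stabilizes the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.zeros(n)
x[20], x[40] = 1.0, 1.0                     # two point scatterers

# Band-limiting forward operator: a Gaussian blur matrix standing in for
# the imaging operator (illustrative, not an actual SAR model)
i = np.arange(n)
A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 1.5) ** 2)
A /= A.sum(axis=1, keepdims=True)

y = A @ x + 1e-3 * rng.standard_normal(n)   # noisy measurement

x_naive = np.linalg.solve(A, y)             # ill-posed: noise is amplified
lam = 1e-2
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)   # regularized

print(np.linalg.norm(x_naive - x) > np.linalg.norm(x_tik - x))
```

    The regularization parameter trades resolution against noise amplification, which is one way to state the criterion problem the abstract raises for judging superresolution processing.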

  12. Modeling for deformable mirrors and the adaptive optics optimization program

    SciTech Connect

    Henesian, M.A.; Haney, S.W.; Trenholme, J.B.; Thomas, M.

    1997-03-18

    We discuss aspects of adaptive optics optimization for large fusion laser systems such as the 192-arm National Ignition Facility (NIF) at LLNL. By way of example, we considered the discrete actuator deformable mirror and Hartmann sensor system used on the Beamlet laser. Beamlet is a single-aperture prototype of the 11-0-5 slab amplifier design for NIF, and so we expect similar optical distortion levels and deformable mirror correction requirements. We are now in the process of developing a numerically efficient, object-oriented C++ implementation of our adaptive optics and wavefront sensor code, but this code is not yet operational. Results are based instead on the prototype algorithms, coded up in an interpreted array-processing language.

  13. Fabrication of the pinhole aperture for AdaptiSPECT

    PubMed Central

    Kovalsky, Stephen; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical pinhole SPECT imaging system under final construction at the Center for Gamma-Ray Imaging. The system is designed to be able to autonomously change its imaging configuration. The system comprises 16 detectors mounted on translational stages to move radially toward and away from the center of the field-of-view. The system also possesses an adaptive pinhole aperture with multiple collimator diameters and pinhole sizes, as well as the possibility to switch between multiplexed and non-multiplexed imaging configurations. In this paper, we describe the fabrication of the AdaptiSPECT pinhole aperture and its controllers. PMID:26146443

  14. Computational study of ion beam extraction phenomena through multiple apertures

    SciTech Connect

    Hu, Wanpeng; Sang, Chaofeng; Tang, Tengfei; Wang, Dezhen; Li, Ming; Jin, Dazhi; Tan, Xiaohua

    2014-03-15

    The process of ion extraction through multiple apertures is investigated using a two-dimensional particle-in-cell code. We consider apertures of fixed diameter with a hydrogen plasma background, and the trajectories of electrons, H{sup +} and H{sub 2}{sup +} ions in the self-consistently calculated electric field are traced. The focus of this work is the fundamental physics of the ion extraction, rather than a particular device. The computed convergence and divergence of the extracted ion beam are analyzed. We find that the extracted ion flux reaching the extraction electrode is non-uniform, and the peak flux positions change according to operational parameters, and do not necessarily match the positions of the apertures in the y-direction. The profile of the ion flux reaching the electrode is mainly affected by the bias voltage and the distance between grid wall and extraction electrode.
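
    The particle-tracing step inside a PIC cycle is typically a leapfrog push of each particle in the field solved on the grid. As a sketch, the snippet below pushes a single H+ ion in a fixed uniform extraction field (the field value and time step are illustrative assumptions, not the paper's self-consistent solution) and checks energy gain against the work done by the field.

```python
import numpy as np

q = 1.602e-19                 # proton charge (C)
m = 1.673e-27                 # H+ mass (kg)
E = np.array([0.0, 1.0e4])    # assumed uniform extraction field (V/m)
dt = 1.0e-9                   # time step (s)
x = np.array([0.0, 0.0])
v = np.array([0.0, 0.0])

v += 0.5 * (q / m) * E * dt   # initial half-kick staggers v and x (leapfrog)
for _ in range(1000):
    x += v * dt
    v += (q / m) * E * dt

# Work-energy check: kinetic energy gained should match q * E * distance
ke = 0.5 * m * v @ v
work = q * E[1] * x[1]
print(abs(ke - work) / work < 0.01)
```

    In an actual PIC code the field E would be interpolated from the self-consistently computed grid solution at each particle position rather than held constant.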

  15. FESDIF -- Finite Element Scalar Diffraction theory code

    SciTech Connect

    Kraus, H.G.

    1992-09-01

    This document describes the theory and use of a powerful scalar diffraction theory based computer code for calculation of intensity fields due to diffraction of optical waves by two-dimensional planar apertures and lenses. This code is called FESDIF (Finite Element Scalar Diffraction). It is based upon both Fraunhofer and Kirchhoff scalar diffraction theories. Simplified routines for circular apertures are included. However, the real power of the code comes from its basis in finite element methods. These methods allow the diffracting aperture to be virtually any geometric shape, including the various secondary aperture obstructions present in telescope systems. Aperture functions, with virtually any phase and amplitude variations, are allowed in the aperture openings. Step change aperture functions are accommodated. The incident waves are considered to be monochromatic. Plane waves, spherical waves, or Gaussian laser beams may be incident upon the apertures. Both area and line integral transformations were developed for the finite element based diffraction transformations. There is some loss of aperture function generality in the line integral transformations which are typically many times more computationally efficient than the area integral transformations when applicable to a particular problem.
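
    The Fraunhofer transformations FESDIF evaluates can be checked against textbook cases by brute-force quadrature of the scalar diffraction integral. The sketch below does this for a 1-D slit (all values are illustrative, not from the code's documentation) and recovers the expected first null of the sinc-squared pattern at sin(theta) = lambda / a.

```python
import numpy as np

lam = 500e-9                 # wavelength (m)
a = 100e-6                   # slit width (m)
k = 2 * np.pi / lam

xs = np.linspace(-a / 2, a / 2, 2001)   # sample points across the aperture
dx = xs[1] - xs[0]
thetas = np.linspace(1e-6, 0.01, 2000)  # observation angles (rad)

def intensity(theta):
    # Direct quadrature of the 1-D Fraunhofer integral over the aperture
    field = np.sum(np.exp(-1j * k * xs * np.sin(theta))) * dx
    return abs(field) ** 2

I = np.array([intensity(t) for t in thetas])
I /= I.max()

# Theory: first null of the sinc^2 pattern at sin(theta) = lam / a
mask = (thetas > 0.004) & (thetas < 0.006)
first_null = thetas[mask][np.argmin(I[mask])]
print(round(first_null / (lam / a), 2))   # ≈ 1.0
```

    A finite-element code generalizes exactly this quadrature to arbitrary aperture shapes and aperture functions.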

  16. Extraordinary optical transmission through patterned subwavelength apertures.

    SciTech Connect

    Kemme, Shanalyn A.; El-Kady, Ihab Fathy; Hadley, G. Ronald; Peters, David William; Lanes, Chris E.

    2004-12-01

    Light propagating through a subwavelength aperture can be dramatically increased by etching a grating in the metal around the hole. Moreover, light that would typically broadly diverge when passing through an unpatterned subwavelength hole can be directed into a narrow beam by utilizing a specific pattern around the aperture. While the increased transmission and narrowed angular emission appear to defy far-field diffraction theory, they are consistent with a fortuitous plasmon/photon coupling. In addition, the coupling between photons and surface plasmons affects the emissivity of a surface comprised of such structures. These properties are useful across several strategic areas of interest to Sandia. A controllable emission spectrum could benefit satellite and military application areas. Photolithography and near-field microscopy are natural applications for a system that controls light beyond the diffraction limit in a manner that is easily parallelizable. Over the one year of this LDRD, we have built or modified the numerical tools necessary to model such structures. These numerical codes and the knowledge base for using them appropriately will be available in the future for modeling work on surface plasmons or other optical modeling at Sandia. Using these tools, we have designed and optimized structures for various transmission or emission properties. We demonstrate the ability to design a metallic skin with an emissivity peak at a pre-determined wavelength in the spectrum. We optimize structures for maximum light transmission and show transmitted beams that beat the far-field diffraction limit.

  17. Real-time implementation of a speech digitization algorithm combining time-domain harmonic scaling and adaptive residual coding, volume 2

    NASA Astrophysics Data System (ADS)

    Melsa, J. L.; Mills, J. D.; Arora, A. A.

    1983-06-01

    This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non-real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll quality speech digitization. This report has been divided into two volumes. The second volume discusses details of the hardware implementation, schematics for the system, and operating instructions.

  18. Real-time implementation of a speech digitization algorithm combining time-domain harmonic scaling and adaptive residual coding, volume 1

    NASA Astrophysics Data System (ADS)

    Melsa, J. L.; Mills, J. D.; Arora, A. A.

    1983-06-01

    This report describes the results of a fifteen-month study of the real-time implementation of an algorithm combining time-domain harmonic scaling and Adaptive Residual Coding at a transmission bit rate of 16 kb/s. The modifications of this encoding algorithm as originally presented by Melsa and Pande to allow real-time implementation are described in detail. A non-real-time FORTRAN simulation using a sixteen-bit word length was developed and tested to establish feasibility. The hardware implementation of a full-duplex, real-time system has demonstrated that this algorithm is capable of producing toll quality speech digitization. This report has been divided into two volumes. The first volume discusses the algorithm modifications and FORTRAN simulation. The details of the hardware implementation, schematics for the system, and operating instructions are included in Volume 2 of this final report.
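
    The core of any adaptive residual coder is a predictor plus an adaptive quantizer for the prediction residual. The following is a minimal sketch of that idea (a first-order predictor with a 4-bit adaptive step size); it is illustrative only, not the report's exact coder, and omits the time-domain harmonic scaling stage entirely.

```python
import numpy as np

def adpcm_encode(x):
    codes, pred, step = [], 0.0, 1.0
    for s in x:
        r = s - pred                                # prediction residual
        q = int(np.clip(round(r / step), -8, 7))    # 4-bit residual code
        codes.append(q)
        pred += q * step                            # predictor tracks decoder
        step *= 1.1 if abs(q) >= 6 else 0.95        # adapt step size
        step = min(max(step, 0.01), 1000.0)
    return codes

def adpcm_decode(codes):
    out, pred, step = [], 0.0, 1.0
    for q in codes:
        pred += q * step
        out.append(pred)
        step *= 1.1 if abs(q) >= 6 else 0.95        # mirror encoder adaptation
        step = min(max(step, 0.01), 1000.0)
    return np.array(out)

t = np.arange(2000)
x = 100 * np.sin(2 * np.pi * t / 200)               # slowly varying test signal
y = adpcm_decode(adpcm_encode(x))
snr = 10 * np.log10(np.sum(x**2) / np.sum((x - y)**2))
print(snr > 10)   # reconstruction well above the quantization-error floor
```

    Because the encoder's predictor is driven by the quantized residual, the decoder can track it exactly from the 4-bit codes alone, which is what makes the bit-rate reduction possible.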

  19. Automation and adaptation: Nurses' problem-solving behavior following the implementation of bar coded medication administration technology.

    PubMed

    Holden, Richard J; Rivera-Rodriguez, A Joy; Faye, Héléne; Scanlon, Matthew C; Karsh, Ben-Tzion

    2013-08-01

    The most common change facing nurses today is new technology, particularly bar coded medication administration technology (BCMA). However, there is a dearth of knowledge on how BCMA alters nursing work. This study investigated how BCMA technology affected nursing work, particularly nurses' operational problem-solving behavior. Cognitive systems engineering observations and interviews were conducted after the implementation of BCMA in three nursing units of a freestanding pediatric hospital. Problem-solving behavior, associated problems, and goals were specifically defined and extracted from observed episodes of care. Three broad themes regarding BCMA's impact on problem solving were identified. First, BCMA allowed nurses to invent new problem-solving behavior to deal with pre-existing problems. Second, BCMA made it difficult or impossible to apply some problem-solving behaviors that were commonly used pre-BCMA, often requiring nurses to use potentially risky workarounds to achieve their goals. Third, BCMA created new problems that nurses were either able to solve using familiar or novel problem-solving behaviors, or unable to solve effectively. Results from this study shed light on hidden hazards and suggest three critical design needs: (1) ecologically valid design; (2) anticipatory control; and (3) basic usability. Principled studies of the actual nature of clinicians' work, including problem solving, are necessary to uncover hidden hazards and to inform health information technology design and redesign.

  20. Dynamic aperture studies during collisions in the LHC

    SciTech Connect

    Chou, W.; Ritson, D.

    1997-06-01

    The dynamic aperture during collisions in the LHC is mainly determined by the beam-beam interactions and by multipole errors of the high gradient quadrupoles in the interaction regions. The computer code JJIP has been modified to accommodate the LHC lattice configuration and parameters and is employed in this study. Simulations over a range of machine parameters are carried out, and results of preliminary investigation are presented.

  1. SEASAT Synthetic Aperture Radar Data

    NASA Technical Reports Server (NTRS)

    Henderson, F. M.

    1981-01-01

    The potential of radar imagery from space altitudes is discussed and the advantages of radar over passive sensor systems are outlined. Specific reference is made to the SEASAT synthetic aperture radar. Possible applications include oil spill monitoring, snow and ice reconnaissance, mineral exploration, and monitoring phenomena in the urban environment.

  2. Large aperture diffractive space telescope

    DOEpatents

    Hyde, Roderick A.

    2001-01-01

    A large (tens of meters) aperture space telescope including two separate spacecraft--an optical primary objective lens functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target, with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The objective lens includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the objective lens, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets, which may be either earth bound or celestial.
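
    A quick back-of-the-envelope on why such a large diffractive primary is attractive: angular resolution scales as the Rayleigh criterion 1.22 lambda / D. The 25 m aperture, 550 nm wavelength, and 1 km separation below are assumed illustrative values, not numbers from the patent.

```python
# Rayleigh-criterion resolution for a large diffractive primary
# (aperture, wavelength, and separation are assumed values)
D = 25.0            # primary lens diameter (m), "tens of meters"
lam = 550e-9        # visible wavelength (m)
sep = 1000.0        # objective-to-eyepiece separation (m), "up to several km"

theta = 1.22 * lam / D          # diffraction-limited angle (rad)
mas = theta * 206265 * 1000     # same angle in milliarcseconds
spot = theta * sep              # focal-spot scale at the eyepiece (m)
print(round(mas, 2), round(spot * 1e6, 1))   # ~5.54 mas, ~26.8 micrometers
```

    The milliarcsecond-scale resolution is what motivates the very large, very thin membrane lens, while the micrometer-scale focal spot shows why a meter-scale eyepiece spacecraft suffices to gather the light.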

  3. Alternative aperture stop position designs for SIRTF

    NASA Technical Reports Server (NTRS)

    Davis, Paul K.; Dinger, Ann S.

    1990-01-01

    Three designs of the Space Infrared Telescope Facility (SIRTF) for a 100,000-km high Earth orbit are considered, with particular attention given to the evaluation of the aperture stop position. The choice of aperture stop position will be based on stray light considerations which are being studied concurrently. It is noted that there are advantages in cost, mass, and astronomical aperture to placing the aperture stop at or near the primary mirror, if the stray light circumstances allow.

  4. Large aperture scanning airborne lidar

    NASA Technical Reports Server (NTRS)

    Smith, J.; Bindschadler, R.; Boers, R.; Bufton, J. L.; Clem, D.; Garvin, J.; Melfi, S. H.

    1988-01-01

    A large aperture scanning airborne lidar facility is being developed to provide important new capabilities for airborne lidar sensor systems. The proposed scanning mechanism allows for a large aperture telescope (25 in. diameter) in front of an elliptical flat (25 x 36 in.) turning mirror positioned at a 45 degree angle with respect to the telescope optical axis. The lidar scanning capability will provide opportunities for acquiring new data sets for atmospheric, earth resources, and oceans communities. This completed facility will also make available the opportunity to acquire simulated EOS lidar data on a near global basis. The design and construction of this unique scanning mechanism presents exciting technological challenges of maintaining the turning mirror optical flatness during scanning while exposed to extreme temperatures, ambient pressures, aircraft vibrations, etc.

  5. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted from analyzing all angles of light rays emanated from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal. This hardware will be qualified for harsh environments for 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified. The lens system will be arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected with an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuation than an LC panel can.

  6. TranAir: A full-potential, solution-adaptive, rectangular grid code for predicting subsonic, transonic, and supersonic flows about arbitrary configurations. Theory document

    NASA Technical Reports Server (NTRS)

    Johnson, F. T.; Samant, S. S.; Bieterman, M. B.; Melvin, R. G.; Young, D. P.; Bussoletti, J. E.; Hilmes, C. L.

    1992-01-01

    A new computer program, called TranAir, for analyzing complex configurations in transonic flow (with subsonic or supersonic freestream) was developed. This program provides accurate and efficient simulations of nonlinear aerodynamic flows about arbitrary geometries with the ease and flexibility of a typical panel method program. The numerical method implemented in TranAir is described. The method solves the full potential equation subject to a set of general boundary conditions and can handle regions with differing total pressure and temperature. The boundary value problem is discretized using the finite element method on a locally refined rectangular grid. The grid is automatically constructed by the code and is superimposed on the boundary described by networks of panels; thus no surface fitted grid generation is required. The nonlinear discrete system arising from the finite element method is solved using a preconditioned Krylov subspace method embedded in an inexact Newton method. The solution is obtained on a sequence of successively refined grids which are either constructed adaptively based on estimated solution errors or are predetermined based on user inputs. Many results obtained by using TranAir to analyze aerodynamic configurations are presented.
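
    The solution strategy the abstract names, a preconditioned Krylov subspace method embedded in an inexact Newton method, can be sketched on a small model problem. The snippet below applies SciPy's Newton-Krylov solver to a 1-D Bratu-type nonlinear boundary-value problem; the model problem and grid size are illustrative assumptions, not TranAir's full potential equation.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 50
h = 1.0 / (n + 1)

def residual(u):
    # -u'' = exp(u) with u = 0 at both ends, finite-difference discretization
    upad = np.concatenate(([0.0], u, [0.0]))
    return -(upad[2:] - 2 * upad[1:-1] + upad[:-2]) / h**2 - np.exp(u)

# Inexact Newton: each Newton step solves the linearized system only
# approximately with a Krylov method (matrix-free Jacobian-vector products)
u = newton_krylov(residual, np.zeros(n), f_tol=1e-9)
print(np.max(np.abs(residual(u))) < 1e-8)   # converged
```

    The matrix-free character of the Krylov inner solver is what makes this approach practical on the large, locally refined grids the abstract describes, since the Jacobian never needs to be formed explicitly.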

  7. Errors associated with the use of adaptive differential pulse code modulation in the compression of isometric and dynamic myo-electric signals.

    PubMed

    Chan, A D; Lovely, D F; Hudgins, B

    1998-03-01

    Muscle activity produces an electrical signal termed the myo-electric signal (MES). The MES is a useful clinical tool, used in diagnostics and rehabilitation. This signal is typically stored in 2 bytes as 12-bit data, sampled at 3 kHz, resulting in a 6 kbyte s-1 storage requirement. Processing MES data requires large bit manipulations and heavy memory storage requirements. Adaptive differential pulse code modulation (ADPCM) is a popular and successful compression technique for speech. Its application to MES would reduce 12-bit data to a 4-bit representation, providing a 3:1 compression. As, in most practical applications, memory is organised in bytes, the realisable compression is 4:1, as pairs of data can be stored in a single byte. The performance of the ADPCM compression technique, using a real-time system at 1 kHz, 2 kHz and 4 kHz sampling rates, is evaluated. The data used include MES from both isometric and dynamic contractions. The percent residual difference (PRD) between an unprocessed and processed MES is used as a performance measure. Errors in computed parameters, such as median frequency and variance, which are used in clinical diagnostics, and waveform features employed in prosthetic control are also used to evaluate the system. The results of the study demonstrate that the ADPCM compression technique is an excellent solution for relieving the data storage requirements of MES both in isometric and dynamic situations. PMID:9684462
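
    The percent residual difference used as the performance measure is a standard normalized error metric. A minimal sketch of it, applied to a synthetic signal with an artificial offset standing in for compression error (the test signal and offset are assumptions for illustration, not MES data from the study):

```python
import numpy as np

def prd(original, processed):
    """Percent residual difference between unprocessed and processed signals."""
    return 100.0 * np.sqrt(np.sum((original - processed) ** 2)
                           / np.sum(original ** 2))

x = np.sin(np.linspace(0, 10, 1000))   # stand-in for an MES record
x_hat = x + 0.02                       # crude stand-in for compression error
p = prd(x, x_hat)
print(round(p, 1))                     # ≈ 2.9
```

    The storage arithmetic in the abstract follows the same spirit: 12-bit samples reduced to 4-bit codes give a nominal 3:1 ratio, and byte-organized memory packing two codes per byte makes the realizable ratio 4:1.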

  8. Controlled-aperture wave-equation migration

    SciTech Connect

    Huang, L.; Fehler, Michael C.; Sun, H.; Li, Z.

    2003-01-01

    We present a controlled-aperture wave-equation migration method that not only can reduce migration artifacts due to limited recording apertures and determine image weights to balance the effects of limited-aperture illumination, but also can improve the migration accuracy by reducing the slowness perturbations within the controlled migration regions. The method consists of two steps: migration aperture scan and controlled-aperture migration. Migration apertures for a sparse distribution of shots are determined using wave-equation migration, and those for the other shots are obtained by interpolation. During the final controlled-aperture migration step, we can select a reference slowness in controlled regions of the slowness model to reduce slowness perturbations, and consequently increase the accuracy of wave-equation migration methods that make use of reference slownesses. In addition, the computation in the space domain during wavefield downward continuation needs to be conducted only within the controlled apertures; therefore, the computational cost of the controlled-aperture migration step (not including the migration aperture scan) is less than that of the corresponding uncontrolled-aperture migration. Finally, we can use the efficient split-step Fourier approach for the migration-aperture scan, then use other, more accurate though more expensive, wave-equation migration methods to perform the final controlled-aperture migration to produce the most accurate image.

  9. 3D synthetic aperture for controlled-source electromagnetics

    NASA Astrophysics Data System (ADS)

    Knaak, Allison

    Locating hydrocarbon reservoirs has become more challenging with smaller, deeper, or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and depth of the reservoir. To reduce the impact of complicated settings and improve the detecting capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture responses amplify the anomaly from the reservoir, lower the noise floor, and reduce noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets.
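
    At its core, the synthetic aperture response is a weighted sum of the individual source responses, and the variance-minimizing term exploits the fact that independent noise averages down under stacking. A toy numerical sketch of that mechanism (uniform weights, white noise, and a constant target response are all simplifying assumptions; steering phases are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_src, n_rx = 50, 400
signal = 1.0                                   # common target response (arbitrary units)
noise = 0.5 * rng.standard_normal((n_src, n_rx))
responses = signal + noise                     # one noisy response per source

w = np.ones(n_src) / n_src                     # uniform weights (steering omitted)
stacked = w @ responses                        # synthetic-aperture response

snr_single = signal / noise[0].std()
snr_stacked = signal / (stacked - signal).std()
print(snr_stacked / snr_single > 5)            # ~sqrt(50) ≈ 7x improvement
```

    Choosing non-uniform, optimized weights, as the thesis does, trades some of this variance reduction for steering and focusing gains that depend on the data.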

  10. Aperture scanning Fourier ptychographic microscopy

    PubMed Central

    Ou, Xiaoze; Chung, Jaebum; Horstmeyer, Roarke; Yang, Changhuei

    2016-01-01

    Fourier ptychographic microscopy (FPM) is implemented through aperture scanning by an LCOS spatial light modulator at the back focal plane of the objective lens. This FPM configuration enables the capturing of the complex scattered field for a 3D sample both in the transmissive mode and the reflective mode. We further show that by combining with the compressive sensing theory, the reconstructed 2D complex scattered field can be used to recover the 3D sample scattering density. This implementation expands the scope of application for FPM and can be beneficial for areas such as tissue imaging and wafer inspection. PMID:27570705

  11. Dual aperture multispectral Schmidt objective

    NASA Technical Reports Server (NTRS)

    Minott, P. O. (Inventor)

    1984-01-01

    A dual aperture, off-axis catadioptric Schmidt objective is described. It is formed by symmetrically aligning two pairs of Schmidt objectives on opposite sides of a common plane (x,z). Each objective has a spherical primary mirror with a spherical focal plane and center of curvature aligned along an optic axis laterally spaced apart from the common plane. A multiprism beamsplitter with buried dichroic layers and convex entrance and concave exit surfaces optically concentric to the center of curvature may be positioned at the focal plane. The primary mirrors of each objective may be connected rigidly together and may have equal or unequal focal lengths.

  12. the Large Aperture GRB Observatory

    SciTech Connect

    Bertou, Xavier

    2009-04-30

    The Large Aperture GRB Observatory (LAGO) aims at the detection of high energy photons from Gamma Ray Bursts (GRB) using the single particle technique (SPT) in ground based water Cherenkov detectors (WCD). To reach a reasonable sensitivity, high altitude mountain sites have been selected in Mexico (Sierra Negra, 4550 m a.s.l.), Bolivia (Chacaltaya, 5300 m a.s.l.) and Venezuela (Merida, 4765 m a.s.l.). We report on the progress of the project and the first operation at high altitude, a search for bursts in 6 months of preliminary data, as well as a search for signal at ground level when satellites report a burst.

  13. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
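
    The non-parametric part of the model rests on compactly supported univariate B-splines, which are conventionally evaluated with the Cox-de Boor recursion. The sketch below (a generic textbook implementation, not SP-BMARS itself, with an assumed uniform knot vector) evaluates a cubic basis and checks two defining properties: compact support and partition of unity inside the domain.

```python
import numpy as np

def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th degree-p B-spline at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + p] > knots[i]:
        left = ((t - knots[i]) / (knots[i + p] - knots[i])
                * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] > knots[i + 1]:
        right = ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                 * bspline_basis(i + 1, p - 1, t, knots))
    return left + right

knots = np.arange(10.0)          # assumed uniform knot vector
p = 3                            # cubic
t = 4.5
vals = [bspline_basis(i, p, t, knots) for i in range(len(knots) - p - 1)]
print(round(sum(vals), 6))       # partition of unity inside the domain: 1.0
```

    Tensor products of such 1-D bases give the spatio-temporal VTEC model, and the compact support is what lets the adaptive scale-by-scale strategy fit the data locally.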

  14. Compressible Astrophysics Simulation Code

    SciTech Connect

    Howell, L.; Singer, M.

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is to neutrino diffusion in core collapse supernovae.

  15. Evaluation of total effective dose due to certain environmentally placed naturally occurring radioactive materials using a procedural adaptation of RESRAD code.

    PubMed

    Beauvais, Z S; Thompson, K H; Kearfott, K J

    2009-07-01

    Due to a recent upward trend in the price of uranium and subsequent increased interest in uranium mining, accurate modeling of baseline dose from environmental sources of radioactivity is of increasing interest. Residual radioactivity model and code (RESRAD) is a program used to model environmental movement and calculate the dose due to the inhalation, ingestion, and exposure to radioactive materials following a placement. This paper presents a novel use of RESRAD for the calculation of dose from non-enhanced, or ancient, naturally occurring radioactive material (NORM). In order to use RESRAD to calculate the total effective dose (TED) due to ancient NORM, a procedural adaptation was developed to negate the effects of time progressive distribution of radioactive materials. A dose due to United States' average concentrations of uranium, actinium, and thorium series radionuclides was then calculated. For adults exposed in a residential setting and assumed to eat significant amounts of food grown in NORM concentrated areas, the annual dose due to national average NORM concentrations was 0.935 mSv y(-1). A set of environmental dose factors were calculated for simple estimation of dose from uranium, thorium, and actinium series radionuclides for various age groups and exposure scenarios as a function of elemental uranium and thorium activity concentrations in groundwater and soil. The values of these factors for uranium were lowest for an adult exposed in an industrial setting: 0.00476 microSv kg Bq(-1) y(-1) for soil and 0.00596 microSv m(3) Bq(-1) y(-1) for water (assuming a 1:1 234U:238U activity ratio in water). The uranium factors were highest for infants exposed in a residential setting and assumed to ingest food grown onsite: 34.8 microSv kg Bq(-1) y(-1) in soil and 13.0 microSv m(3) Bq(-1) y(-1) in water. PMID:19509509

  16. An adaptive-in-temperature method for on-the-fly sampling of thermal neutron scattering data in continuous-energy Monte Carlo codes

    NASA Astrophysics Data System (ADS)

    Pavlou, Andrew Theodore

    The Monte Carlo simulation of full-core neutron transport requires high fidelity data to represent not only the various types of possible interactions that can occur, but also the temperature and energy regimes for which these data are relevant. For isothermal conditions, nuclear cross section data are processed in advance of running a simulation. In reality, the temperatures in a neutronics simulation are not fixed, but change with respect to the temperatures computed from an associated heat transfer or thermal hydraulic (TH) code. To account for the temperature change, a code user must either 1) compute new data at the problem temperature inline during the Monte Carlo simulation or 2) pre-compute data at a variety of temperatures over the range of possible values. Inline data processing is computationally inefficient while pre-computing data at many temperatures can be memory expensive. An alternative on-the-fly approach to handle the temperature component of nuclear data is desired. By on-the-fly we mean a procedure that adjusts cross section data to the correct temperature adaptively during the Monte Carlo random walk instead of before the running of a simulation. The on-the-fly procedure should also preserve simulation runtime efficiency. While on-the-fly methods have recently been developed for higher energy regimes, the double differential scattering of thermal neutrons has not been examined in detail until now. In this dissertation, an on-the-fly sampling method is developed by investigating the temperature dependence of the thermal double differential scattering distributions. The temperature dependence is analyzed with a linear least squares regression test to develop fit coefficients that are used to sample thermal scattering data at any temperature. The amount of pre-stored thermal scattering data has been drastically reduced from around 25 megabytes per temperature per nuclide to only a few megabytes per nuclide by eliminating the need to compute data
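
    The regression idea can be sketched in a few lines (a hedged illustration only: the polynomial order and the synthetic data below are invented, not the dissertation's actual fit coefficients):

```python
import numpy as np

# Fit a low-order polynomial in temperature to cross sections pre-computed
# at a few temperatures, then evaluate it at an arbitrary temperature
# during the random walk instead of storing dense per-temperature tables.
def fit_temperature_coeffs(temps, xs_values, order=2):
    return np.polyfit(temps, xs_values, order)

def sample_xs(coeffs, temperature):
    return np.polyval(coeffs, temperature)

# Synthetic data mimicking a smooth 1/sqrt(T) trend (illustrative only).
temps = np.array([300.0, 600.0, 900.0, 1200.0])
xs = 20.0 / np.sqrt(temps)
coeffs = fit_temperature_coeffs(temps, xs)
xs_750 = sample_xs(coeffs, 750.0)       # on-the-fly value at 750 K
```

    Storing a handful of fit coefficients per nuclide in place of full per-temperature tables is what produces the memory reduction described above.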

  17. Resonant Effects in Nanoscale Bowtie Apertures

    NASA Astrophysics Data System (ADS)

    Ding, Li; Qin, Jin; Guo, Songpo; Liu, Tao; Kinzel, Edward; Wang, Liang

    2016-06-01

    Nanoscale bowtie aperture antennas can be used to focus light well below the diffraction limit with extremely high transmission efficiencies. This paper studies the spectral dependence of the transmission through nanoscale bowtie apertures defined in a silver film. A realistic bowtie aperture is numerically modeled using the Finite Difference Time Domain (FDTD) method. Results show that the transmission spectrum is dominated by Fabry-Pérot (F-P) waveguide modes and plasmonic modes. The F-P resonance is sensitive to the thickness of the film and the plasmonic resonant mode is closely related to the gap distance of the bowtie aperture. Both characteristics significantly affect the transmission spectrum. To verify these numerical results, bowtie apertures are FIB milled in a silver film. Experimental transmission measurements agree with simulation data. Based on this result, nanoscale bowtie apertures can be optimized to realize deep sub-wavelength confinement with high transmission efficiency with applications to nanolithography, data storage, and bio-chemical sensing.

  18. Particle-in-Cell Modeling of Magnetized Argon Plasma Flow Through Small Mechanical Apertures

    SciTech Connect

    Adam B. Sefkow and Samuel A. Cohen

    2009-04-09

    Motivated by observations of supersonic argon-ion flow generated by linear helicon-heated plasma devices, a three-dimensional particle-in-cell (PIC) code is used to study whether stationary electrostatic layers form near mechanical apertures intersecting the flow of magnetized plasma. By self-consistently evaluating the temporal evolution of the plasma in the vicinity of the aperture, the PIC simulations characterize the roles of the imposed aperture and applied magnetic field on ion acceleration. The PIC model includes ionization of a background neutral-argon population by thermal and superthermal electrons, the latter found upstream of the aperture. Near the aperture, a transition from a collisional to a collisionless regime occurs. Perturbations of density and potential, with mm wavelengths and consistent with ion acoustic waves, propagate axially. An ion acceleration region of length ~ 200-300 λD,e forms at the location of the aperture and is found to be an electrostatic double layer, with axially-separated regions of net positive and negative charge. Reducing the aperture diameter or increasing its length increases the double layer strength.

  19. Coding for Electronic Mail

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously--between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  20. Ion mobility spectrometer with virtual aperture grid

    DOEpatents

    Pfeifer, Kent B.; Rumpf, Arthur N.

    2010-11-23

    An ion mobility spectrometer does not require a physical aperture grid to prevent premature ion detector response. The last electrodes adjacent to the ion collector (typically the last four or five) have an electrode pitch that is less than the width of the ion swarm and each of the adjacent electrodes is connected to a source of free charge, thereby providing a virtual aperture grid at the end of the drift region that shields the ion collector from the mirror current of the approaching ion swarm. The virtual aperture grid is less complex in assembly and function and is less sensitive to vibrations than the physical aperture grid.

  1. Simultaneous displacement and slope measurement in electronic speckle pattern interferometry using adjustable aperture multiplexing.

    PubMed

    Lu, Min; Wang, Shengjia; Aulbach, Laura; Koch, Alexander W

    2016-08-01

    This paper suggests the use of adjustable aperture multiplexing (AAM), a method which is able to introduce multiple tunable carrier frequencies into a three-beam electronic speckle pattern interferometer to measure the out-of-plane displacement and its first-order derivative simultaneously. In the optical arrangement, two single apertures are located in the object and reference light paths, respectively. In cooperation with two adjustable mirrors, virtual images of the single apertures construct three pairs of virtual double apertures with variable aperture opening sizes and aperture distances. By setting the aperture parameter properly, three tunable spatial carrier frequencies are produced within the speckle pattern and completely separate the information of three interferograms in the frequency domain. By applying the inverse Fourier transform to a selected spectrum, its corresponding phase difference distribution can thus be evaluated. Therefore, we can obtain the phase map due to the deformation as well as its slope of the test surface from two speckle patterns which are recorded at different loading events. By this means, simultaneous and dynamic measurements are realized. AAM has greatly simplified the measurement system, which contributes to improving the system stability and increasing the system flexibility and adaptability to various measurement requirements. This paper presents the AAM working principle, the phase retrieval using spatial carrier frequency, and preliminary experimental results. PMID:27505365

  2. Modal wavefront reconstruction over general shaped aperture by numerical orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Jingfei; Li, Xinhua; Gao, Zhishan; Wang, Shuai; Sun, Wenqing; Wang, Wei; Yuan, Qun

    2015-03-01

    In practical optical measurements, the wavefront data are recorded by pixelated imaging sensors. The closed-form analytical base polynomial will lose its orthogonality in the discrete wavefront database. For a wavefront with an irregularly shaped aperture, the corresponding analytical base polynomials are laboriously derived. The use of numerical orthogonal polynomials for reconstructing a wavefront with a general shaped aperture over the discrete data points is presented. Numerical polynomials are orthogonal over the discrete data points regardless of the boundary shape of the aperture. The performance of numerical orthogonal polynomials is confirmed by theoretical analysis and experiments. The results demonstrate the adaptability, validity, and accuracy of numerical orthogonal polynomials for estimating the wavefront over a general shaped aperture, from regular to irregular boundaries.
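
    The core idea admits a compact numerical sketch (the aperture shape, polynomial degree, and wavefront below are invented): evaluating a monomial basis at the measured pixels and orthogonalizing it, e.g. by QR factorization, yields polynomials that are discretely orthogonal over any aperture shape.

```python
import numpy as np

# Build a basis that is orthonormal over an arbitrary set of discrete
# aperture points by QR-factorizing monomials evaluated at those points.
def numerical_orthogonal_basis(x, y, degree=3):
    cols = [x**i * y**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q                              # columns orthonormal on the samples

# Irregular aperture: unit disk with a circular bite removed (illustrative).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = rng.uniform(-1, 1, 500)
keep = (x**2 + y**2 < 1) & ~((x + 0.4)**2 + y**2 < 0.3)
x, y = x[keep], y[keep]

w = 0.5 * x**2 + 0.2 * x * y              # synthetic wavefront samples
Q = numerical_orthogonal_basis(x, y)
coeffs = Q.T @ w                          # projection: no normal equations needed
residual = np.linalg.norm(Q @ coeffs - w)
```

    Because this synthetic wavefront lies in the span of the degree-3 basis, the projection reproduces it to machine precision; the orthogonality over the discrete points holds regardless of the boundary shape.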

  3. Synthetic aperture radar target simulator

    NASA Technical Reports Server (NTRS)

    Zebker, H. A.; Held, D. N.; Goldstein, R. M.; Bickler, T. C.

    1984-01-01

    A simulator for simulating the radar return, or echo, from a target seen by a SAR antenna mounted on a platform moving with respect to the target is described. It includes a first-in first-out memory which has digital information clocked in at a rate related to the frequency of a transmitted radar signal and digital information clocked out with a fixed delay defining range between the SAR and the simulated target, and at a rate related to the frequency of the return signal. An RF input signal having a frequency similar to that utilized by a synthetic aperture array radar is mixed with a local oscillator signal to provide a first baseband signal having a frequency considerably lower than that of the RF input signal.
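
    The range-defining FIFO can be pictured with a toy delay line (a sketch, not the simulator's hardware: the sample values and the 3-sample delay are invented):

```python
from collections import deque

# A fixed-length FIFO: each sample clocked in emerges a fixed number of
# clocks later, emulating the round-trip delay that defines target range.
class RangeDelayLine:
    def __init__(self, delay_samples):
        self.buf = deque([0.0] * delay_samples, maxlen=delay_samples)

    def clock(self, sample):
        out = self.buf[0]        # oldest sample leaves...
        self.buf.append(sample)  # ...as the new one is clocked in
        return out

line = RangeDelayLine(3)
echoes = [line.clock(s) for s in [1.0, 2.0, 3.0, 4.0, 5.0]]
# echoes == [0.0, 0.0, 0.0, 1.0, 2.0]: the input reappears 3 clocks later
```

    In the actual simulator the in/out clock rates additionally track the transmitted and returned signal frequencies, which this toy omits.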

  4. Synthetic aperture interferometry: error analysis

    SciTech Connect

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  5. A fast tree-based method for estimating column densities in adaptive mesh refinement codes. Influence of UV radiation field on the structure of molecular clouds

    NASA Astrophysics Data System (ADS)

    Valdivia, Valeska; Hennebelle, Patrick

    2014-11-01

    Context. Ultraviolet radiation plays a crucial role in molecular clouds. Radiation and matter are tightly coupled and their interplay influences the physical and chemical properties of gas. In particular, modeling the radiation propagation requires calculating column densities, which can be numerically expensive in high-resolution multidimensional simulations. Aims: Developing fast methods for estimating column densities is mandatory if we are interested in the dynamical influence of the radiative transfer. In particular, we focus on the effect of the UV screening on the dynamics and on the statistical properties of molecular clouds. Methods: We have developed a tree-based method for a fast estimate of column densities, implemented in the adaptive mesh refinement code RAMSES. We performed numerical simulations using this method in order to analyze the influence of the screening on the clump formation. Results: We find that the accuracy for the extinction of the tree-based method is better than 10%, while the relative error for the column density can be much larger. We describe the implementation of a method based on precalculating the geometrical terms that noticeably reduces the calculation time. To study the influence of the screening on the statistical properties of molecular clouds we present the probability distribution function of gas and the associated temperature per density bin and the mass spectra for different density thresholds. Conclusions: The tree-based method is fast and accurate enough to be used during numerical simulations since no communication is needed between CPUs when using a fully threaded tree, which also makes it well suited to parallel computing. We show that the screening for far UV radiation mainly affects the dense gas, thereby favoring low temperatures and affecting the fragmentation. We show that when we include the screening, more structures are formed with higher densities in comparison to the case that does not include this effect.

  6. A systematic review of aperture shapes

    NASA Astrophysics Data System (ADS)

    Schultz, A. B.; Frazier, T. V.

    The paper discusses the application of apodization to reflecting telescopes. The diffraction pattern of a telescope, which is the image of a star, can be changed considerably by using different aperture shapes in combination with appropriately shaped occulting masks on the optical axis. Aperture shapes studied were the circular, square, and hexagonal. Polaris (α UMi) was used as the test system.

  7. Stripe-shaped apertures in confocal microscopy.

    PubMed

    Shen, Shuhao; Zhu, Bingzhao; Zheng, Yao; Gong, Wei; Si, Ke

    2016-09-20

    We have theoretically verified that, compared with the aperture shapes of previous research, combining two stripe-shaped apertures in a confocal microscope with a finite-sized pinhole improves the axial resolution to a certain extent. Because different stripe shapes cause different effects, we also investigated the relationships among resolution, shapes, pinhole size, and the signal-to-background ratio.

  8. Vowel aperture and syllable segmentation in French.

    PubMed

    Goslin, Jeremy; Frauenfelder, Ulrich H

    2008-01-01

    The theories of Pulgram (1970) suggest that if the vowel of a French syllable is open then it will induce syllable segmentation responses that result in the syllable being closed, and vice versa. After the empirical verification that our target French-speaking population was capable of distinguishing between mid-vowel apertures, we examined the relationship between vowel and syllable aperture in two segmentation experiments. Initial findings from a metalinguistic repetition task supported the hypothesis, revealing significant segmentation differences due to vowel aperture across a range of bi-syllabic stimuli. These findings were also supported in an additional online experiment, in which a fragment detection task revealed a syllabic cross-over interaction due to vowel aperture. Evidence from these experiments suggests that multiple, independent cues are used in French syllable segmentation, including vowel aperture.

  9. Micro Ring Grating Spectrometer with Adjustable Aperture

    NASA Technical Reports Server (NTRS)

    Park, Yeonjoon (Inventor); King, Glen C. (Inventor); Elliott, James R. (Inventor); Choi, Sang H. (Inventor)

    2012-01-01

    A spectrometer includes a micro-ring grating device having coaxially-aligned ring gratings for diffracting incident light onto a target focal point, a detection device for detecting light intensity, one or more actuators, and an adjustable aperture device defining a circular aperture. The aperture circumscribes a target focal point, and directs a light to the detection device. The aperture device is selectively adjustable using the actuators to select a portion of a frequency band for transmission to the detection device. A method of detecting intensity of a selected band of incident light includes directing incident light onto coaxially-aligned ring gratings of a micro-ring grating device, and diffracting the selected band onto a target focal point using the ring gratings. The method includes using an actuator to adjust an aperture device and pass a selected portion of the frequency band to a detection device for measuring the intensity of the selected portion.

  10. Variable aperture collimator for high energy radiation

    DOEpatents

    Hill, Ronald A.

    1984-05-22

    An apparatus is disclosed providing a variable aperture energy beam collimator. A plurality of beam opaque blocks are in sliding interface edge contact to form a variable aperture. The blocks may be offset at the apex angle to provide a non-equilateral aperture. A plurality of collimator block assemblies may be employed for providing a channel defining a collimated beam. Adjacent assemblies are inverted front-to-back with respect to one another for preventing noncollimated energy from emerging from the apparatus. An adjustment mechanism comprises a cable attached to at least one block and a hand wheel mechanism for operating the cable. The blocks are supported by guide rods engaging slide brackets on the blocks. The guide rods are pivotally connected at each end to intermediate actuators supported on rotatable shafts to change the shape of the aperture. A divergent collimated beam may be obtained by adjusting the apertures of adjacent stages to be unequal.

  11. Speckle reduction in synthetic-aperture-radar imagery.

    PubMed

    Harvey, E R; April, G V

    1990-07-01

    Speckle appearing in synthetic-aperture-radar images degrades the information contained in these images. Speckle noise can be suppressed by adapted local processing techniques, permitting the definition of statistical parameters inside a small window centered on each pixel of the image. Two processing algorithms are examined; the first one uses the intensity as a variable, and the second one works on a homomorphic transformation of the image intensity. A statistical model for speckle noise that takes into account correlation in multilook imagery has been used to develop these processing algorithms. Several experimental results of processed Seasat-A synthetic-aperture-radar images are discussed. PMID:19768064
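
    A classic instance of such local-statistics processing is the Lee filter; the sketch below (window size, noise level, and test scene are all invented, and this is not necessarily the authors' exact algorithm) weights each pixel between its raw value and the local window mean:

```python
import numpy as np

def local_stats(img, half=2):
    """Per-pixel mean and variance over a (2*half+1)^2 window, edge-clipped."""
    mean = np.empty_like(img)
    var = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            win = img[max(0, i - half):i + half + 1,
                      max(0, j - half):j + half + 1]
            mean[i, j] = win.mean()
            var[i, j] = win.var()
    return mean, var

def lee_filter(img, half=2, noise_var=0.25):
    """Keep detail where local variance dominates the assumed speckle noise."""
    mean, var = local_stats(img, half)
    weight = var / (var + noise_var * mean**2 + 1e-12)
    return mean + weight * (img - mean)

rng = np.random.default_rng(1)
scene = np.ones((32, 32))
scene[8:24, 8:24] = 4.0                               # a bright square target
speckled = scene * rng.gamma(4.0, 0.25, scene.shape)  # 4-look speckle, mean 1
filtered = lee_filter(speckled)
```

    In homogeneous regions the weight falls toward zero and the filter returns the local mean; near edges the variance term keeps the weight near one, preserving structure.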

  13. Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix

    NASA Technical Reports Server (NTRS)

    Wiley, C. A.; Chang, M. U.

    1981-01-01

    A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiplex) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.

  14. Adaptive Thresholds

    SciTech Connect

    Bremer, P. -T.

    2014-08-26

    ADAPT is a topological analysis code that computes local thresholds, in particular relevance-based thresholds, for features defined in scalar fields. The initial target application is vortex detection, but the software is more generally applicable to all threshold-based feature definitions.

  15. Three dimensional digital holographic aperture synthesis.

    PubMed

    Crouch, Stephen; Kaylor, Brant M; Barber, Zeb W; Reibel, Randy R

    2015-09-01

    Aperture synthesis techniques are applied to temporally and spatially diverse digital holograms recorded with a fast focal-plane array. Because the technique fully resolves the downrange dimension using wide-bandwidth FMCW linear-chirp waveforms, extremely high resolution three dimensional (3D) images can be obtained even at very long standoff ranges. This allows excellent 3D image formation even when targets have significant structure or discontinuities, which are typically poorly rendered with multi-baseline synthetic aperture ladar or multi-wavelength holographic aperture ladar approaches. The background for the system is described and system performance is demonstrated through both simulation and experiments. PMID:26368474

  16. Scalar wave diffraction from a circular aperture

    SciTech Connect

    Cerjan, C.

    1995-01-25

    The scalar wave theory is used to evaluate the expected diffraction patterns from a circular aperture. The standard far-field Kirchhoff approximation is compared to the exact result expressed in terms of oblate spheroidal harmonics. Deviations from an expanding spherical wave are calculated as a function of the circular aperture radius and the incident beam wavelength, using values suggested for a recently proposed point diffraction interferometer. The Kirchhoff approximation is increasingly reliable in the far-field limit as the aperture radius is increased, although significant errors in amplitude and phase persist.
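
    The far-field Kirchhoff result for a circular aperture can be checked numerically with a brute-force version of the diffraction integral (the wavelength, radius, and grid sizes below are arbitrary illustrative choices, not values from the paper):

```python
import numpy as np

wavelength = 0.5e-6          # 500 nm illumination (illustrative)
radius = 0.5e-3              # circular aperture radius
k = 2 * np.pi / wavelength

# Sample points inside the aperture on a uniform grid.
xs = np.linspace(-radius, radius, 201)
X, Y = np.meshgrid(xs, xs)
ax = X[X**2 + Y**2 <= radius**2]   # x-coordinates of aperture points

def far_field_intensity(sin_theta):
    """Scalar Kirchhoff sum for an observation direction tilted in x."""
    return np.abs(np.exp(-1j * k * sin_theta * ax).sum())**2

# Scan angles out to 1.8 * lambda/D; the Airy pattern's first zero
# should appear near sin(theta) = 1.22 * lambda / (2 * radius).
diffraction_scale = wavelength / (2 * radius)
angles = np.linspace(1e-9, 1.8 * diffraction_scale, 400)
intensity = np.array([far_field_intensity(s) for s in angles])
first_zero = angles[np.argmin(intensity)]
ratio = first_zero / diffraction_scale   # close to the Airy value 1.22
```

    The brute-force sum recovers the Airy pattern of the far-field limit; comparing it against the oblate-spheroidal exact solution, as the abstract describes, is what exposes the residual amplitude and phase errors.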

  17. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  18. Distributed apertures in laminar flow laser turrets

    NASA Astrophysics Data System (ADS)

    Tousley, B. B.

    1981-09-01

    Assume a technology that permits undistorted laser beam propagation from the aft section of a streamlined turret. A comparison of power on a distant airborne target is made between a single aperture in a large scale streamlined turret with a turbulent boundary layer and various arrays of apertures in small scale streamlined turrets with laminar flow. The array performance is mainly limited by the size of each aperture. From an array one might expect, at best, about 40 percent as much power on the target as from a single aperture with equal area. Since the turbulent boundary layer on the large single-turret has negligible effect on beam quality, the array would be preferred (if all development efforts were essentially equal) only if a laminar wake is an operational requirement.

  19. Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, Roderick Allen

    1998-04-20

    A very large (tens of meters) aperture space telescope including two separate spacecraft--an optical primary functioning as a magnifying glass and an optical secondary functioning as an eyepiece. The spacecraft are spaced up to several kilometers apart with the eyepiece directly behind the magnifying glass "aiming" at an intended target with their relative orientation determining the optical axis of the telescope and hence the targets being observed. The magnifying glass includes a very large-aperture, very-thin-membrane, diffractive lens, e.g., a Fresnel lens, which intercepts incoming light over its full aperture and focuses it towards the eyepiece. The eyepiece has a much smaller, meter-scale aperture and is designed to move along the focal surface of the magnifying glass, gathering up the incoming light and converting it to high quality images. The positions of the two spacecraft are controlled both to maintain a good optical focus and to point at desired targets.

  20. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Y.W.; Wiedermann, A.H.; Ockert, C.E.

    1983-08-26

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  1. Shock wave absorber having apertured plate

    DOEpatents

    Shin, Yong W.; Wiedermann, Arne H.; Ockert, Carl E.

    1985-01-01

    The shock or energy absorber disclosed herein utilizes an apertured plate maintained under the normal level of liquid flowing in a piping system and disposed between the normal liquid flow path and a cavity pressurized with a compressible gas. The degree of openness (or porosity) of the plate is between 0.01 and 0.60. The energy level of a shock wave travelling down the piping system thus is dissipated by some of the liquid being jetted through the apertured plate toward the cavity. The cavity is large compared to the quantity of liquid jetted through the apertured plate, so there is little change in its volume. The porosity of the apertured plate influences the percentage of energy absorbed.

  2. Synthetic Aperture Radar Missions Study Report

    NASA Technical Reports Server (NTRS)

    Bard, S.

    2000-01-01

    This report reviews the history of the LightSAR project and summarizes actions the agency can undertake to support industry-led efforts to develop an operational synthetic aperture radar (SAR) capability in the United States.

  3. Large aperture ac interferometer for optical testing.

    PubMed

    Moore, D T; Murray, R; Neves, F B

    1978-12-15

    A 20-cm clear aperture modified Twyman-Green interferometer is described. The system measures phase with an AC technique called phase-lock interferometry while scanning the aperture with a dual galvanometer scanning system. Position information and phase are stored in a minicomputer with disk storage. This information is manipulated with associated software, and the wavefront deformation due to a test component is graphically displayed in perspective and contour on a CRT terminal. PMID:20208642

  4. Resonant Effects in Nanoscale Bowtie Apertures.

    PubMed

    Ding, Li; Qin, Jin; Guo, Songpo; Liu, Tao; Kinzel, Edward; Wang, Liang

    2016-01-01

    Nanoscale bowtie aperture antennas can be used to focus light well below the diffraction limit with extremely high transmission efficiencies. This paper studies the spectral dependence of the transmission through nanoscale bowtie apertures defined in a silver film. A realistic bowtie aperture is numerically modeled using the Finite Difference Time Domain (FDTD) method. Results show that the transmission spectrum is dominated by Fabry-Pérot (F-P) waveguide modes and plasmonic modes. The F-P resonance is sensitive to the thickness of the film and the plasmonic resonant mode is closely related to the gap distance of the bowtie aperture. Both characteristics significantly affect the transmission spectrum. To verify these numerical results, bowtie apertures are FIB milled in a silver film. Experimental transmission measurements agree with simulation data. Based on this result, nanoscale bowtie apertures can be optimized to realize deep sub-wavelength confinement with high transmission efficiency with applications to nanolithography, data storage, and bio-chemical sensing. PMID:27250995

  6. Application of a geocentrifuge and stereolithographically fabricated apertures to multiphase flow in complex fracture apertures.

    SciTech Connect

    Glenn E. McCreery; Robert D. Stedtfeld; Alan T. Stadler; Daphne L. Stoner; Paul Meakin

    2005-09-01

    A geotechnical centrifuge was used to investigate unsaturated multiphase fluid flow in synthetic fracture apertures under a variety of flow conditions. The geocentrifuge subjected the fluids to centrifugal forces, allowing the Bond number to be systematically changed without adjusting the fracture aperture or the fluids. The fracture models were based on the concept that surfaces generated by the fracture of brittle geomaterials have a self-affine fractal geometry. The synthetic fracture surfaces were fabricated from a transparent epoxy photopolymer using stereolithography, and fluid flow through the transparent fracture models was monitored by an optical image acquisition system. Aperture widths were chosen to be representative of the wide range of geological fractures in the vesicular basalt that lies beneath the Idaho National Laboratory (INL). Transitions between different flow regimes were observed as the acceleration was changed under constant flow conditions. The experiments showed the transition between straight and meandering rivulets in smooth-walled apertures (aperture width = 0.508 mm), the dependence of the rivulet width on acceleration in rough-walled fracture apertures (average aperture width = 0.25 mm), unstable meandering flow in rough-walled apertures at high acceleration (20g), and the narrowing of the wetted region with increasing acceleration during the penetration of water into an aperture filled with wetted particles (0.875 mm diameter glass spheres).
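
The Bond-number scaling that motivates the centrifuge can be sketched in a few lines. This is our own illustration, not the study's analysis; the fluid properties below are assumed textbook values for water/air, and only the 0.508 mm aperture width comes from the abstract.

```python
# Illustrative sketch: the Bond number Bo = delta_rho * g * a**2 / sigma
# compares gravitational (or centrifugal) body forces to surface tension.
# A geocentrifuge raises the effective acceleration g_eff = n * g, scaling
# Bo by n without changing the aperture geometry or the fluids.

def bond_number(delta_rho, g_eff, aperture_width, surface_tension):
    """Bond number for a fluid pair in a fracture aperture (SI units)."""
    return delta_rho * g_eff * aperture_width ** 2 / surface_tension

# Water/air in the 0.508 mm smooth-walled aperture from the abstract;
# density and surface tension are assumed textbook values.
G = 9.81
bo_1g = bond_number(998.0, 1 * G, 0.508e-3, 0.072)
bo_20g = bond_number(998.0, 20 * G, 0.508e-3, 0.072)
assert abs(bo_20g - 20 * bo_1g) < 1e-12  # Bo scales linearly with acceleration
```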

  7. Research of aluminium alloy aerospace structure aperture measurement based on 3D digital speckle correlation method

    NASA Astrophysics Data System (ADS)

    Bai, Lu; Wang, Hongbo; Zhou, Jiangfan; Yang, Rong; Zhang, Hui

    2014-11-01

    In this paper, the aperture change of an aluminium alloy aerospace structure under real load is researched. Static experiments were carried out to simulate the load environment of the flight course. The experimental results prove that, compared with traditional methods, the 3D digital speckle correlation method has good adaptability and precision in measuring aperture change, and it can satisfy non-contact, real-time measurement of 3D deformation or stress concentration. The test results of the new method are compared with those of the traditional method.

  8. How do I fit through that gap? Navigation through apertures in adults with and without developmental coordination disorder.

    PubMed

    Wilmut, Kate; Du, Wenchong; Barnett, Anna L

    2015-01-01

    During everyday life we move around busy environments and encounter a range of obstacles, such as a narrow aperture forcing us to rotate our shoulders in order to pass through. In typically developing individuals the decision to rotate the shoulders is body scaled and this movement adaptation is temporally and spatially tailored to the size of the aperture. This is done effortlessly although it actually involves many complex skills. For individuals with Developmental Coordination Disorder (DCD) moving in a busy environment and negotiating obstacles presents a real challenge which can negatively impact on safety and participation in motor activities in everyday life. However, we have a limited understanding of the nature of the difficulties encountered. Therefore, this current study considered how adults with DCD make action judgements and movement adaptations while navigating apertures. Fifteen adults with DCD and 15 typically developing (TD) controls passed through a series of aperture sizes which were scaled to body size (0.9-2.1 times shoulder width). Spatial and temporal characteristics of movement were collected over the approach phase and while crossing the aperture. The decision to rotate the shoulders was not scaled in the same way for the two groups, with the adults with DCD showing a greater propensity to turn for larger apertures compared to the TD adults when body size alone was accounted for. However, when accounting for degree of lateral trunk movement and variability on the approach, we no longer saw differences between the two groups. In terms of the movement adaptations, the adults with DCD approached an aperture differently when a shoulder rotation was required and then adapted their movement sooner compared to their typical peers. These results point towards an adaptive strategy in adults with DCD which allows them to account for their movement difficulties and avoid collision.

  9. Temporal Coding of Volumetric Imagery

    NASA Astrophysics Data System (ADS)

    Llull, Patrick Ryan

    'Image volumes' refer to realizations of images in other dimensions such as time, spectrum, and focus. Recent advances in scientific, medical, and consumer applications demand improvements in image volume capture. Though image volume acquisition continues to advance, it maintains the same sampling mechanisms that have been used for decades; every voxel must be scanned and is presumed independent of its neighbors. Under these conditions, improving performance comes at the cost of increased system complexity, data rates, and power consumption. This dissertation explores systems and methods capable of efficiently improving sensitivity and performance for image volume cameras, and specifically proposes several sampling strategies that utilize temporal coding to improve imaging system performance and enhance our awareness for a variety of dynamic applications. Video cameras and camcorders sample the video volume (x,y,t) at fixed intervals to gain understanding of the volume's temporal evolution. Conventionally, one must reduce the spatial resolution to increase the framerate of such cameras. Using temporal coding via physical translation of an optical element known as a coded aperture, the coded aperture compressive temporal imaging (CACTI) camera demonstrates a method with which to embed the temporal dimension of the video volume into spatial (x,y) measurements, thereby greatly improving temporal resolution with minimal loss of spatial resolution. This technique, which is among a family of compressive sampling strategies developed at Duke University, temporally codes the exposure readout functions at the pixel level. Since video cameras nominally integrate the remaining image volume dimensions (e.g. spectrum and focus) at capture time, spectral (x,y,t,lambda) and focal (x,y,t,z) image volumes are traditionally captured via sequential changes to the spectral and focal state of the system, respectively. 
The CACTI camera's ability to embed video volumes into images leads to exploration
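
The temporal-coding idea described above can be sketched as a forward model in which shifted copies of one binary mask modulate each video frame before summation onto the detector. This is a minimal illustration with assumed sizes, not the dissertation's implementation.

```python
import numpy as np

# Minimal sketch of a CACTI-style forward model (sizes and the binary mask
# are assumed for illustration): T frames of the video volume are modulated
# by shifted copies of one coded aperture and summed into a single 2D snapshot.

rng = np.random.default_rng(0)
T, H, W = 8, 32, 32
video = rng.random((T, H, W))                    # (x, y, t) image volume
mask = (rng.random((H, W)) > 0.5).astype(float)  # binary coded aperture

# Physically translating the mask one pixel per frame yields per-frame codes.
codes = np.stack([np.roll(mask, t, axis=0) for t in range(T)])
snapshot = (codes * video).sum(axis=0)           # one coded 2D measurement

assert snapshot.shape == (H, W)                  # T frames compressed into one
```

A reconstruction algorithm would then invert this many-to-one map using a sparsity prior, which is where the compressive-sampling machinery enters.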

  10. Aperture effects in squid jet propulsion.

    PubMed

    Staaf, Danna J; Gilly, William F; Denny, Mark W

    2014-05-01

    Squid are the largest jet propellers in nature as adults, but as paralarvae they are some of the smallest, faced with the inherent inefficiency of jet propulsion at a low Reynolds number. In this study we describe the behavior and kinematics of locomotion in 1 mm paralarvae of Dosidicus gigas, the smallest squid yet studied. They swim with hop-and-sink behavior and can engage in fast jets by reducing the size of the mantle aperture during the contraction phase of a jetting cycle. We go on to explore the general effects of a variable mantle and funnel aperture in a theoretical model of jet propulsion scaled from the smallest (1 mm mantle length) to the largest (3 m) squid. Aperture reduction during mantle contraction increases propulsive efficiency at all squid sizes, although 1 mm squid still suffer from low efficiency (20%) because of a limited speed of contraction. Efficiency increases to a peak of 40% for 1 cm squid, then slowly declines. Squid larger than 6 cm must either reduce contraction speed or increase aperture size to maintain stress within maximal muscle tolerance. Ecological pressure to maintain maximum velocity may lead them to increase aperture size, which reduces efficiency. This effect might be ameliorated by nonaxial flow during the refill phase of the cycle. Our model's predictions highlight areas for future empirical work, and emphasize the existence of complex behavioral options for maximizing efficiency at both very small and large sizes.

  11. Approaching real-time terahertz imaging using photo-induced reconfigurable aperture arrays

    NASA Astrophysics Data System (ADS)

    Shams, Md. Itrat Bin; Jiang, Zhenguo; Rahman, Syed; Qayyum, Jubaid; Hesler, Jeffrey L.; Cheng, Li-Jing; Xing, Huili Grace; Fay, Patrick; Liu, Lei

    2014-05-01

    We report a technique using photo-induced coded-aperture arrays for potential real-time THz imaging at room temperature. The coded apertures (based on Hadamard coding) were implemented using programmable illumination on a semi-insulating silicon wafer by a commercial digital-light-processing (DLP) projector. Initial imaging experiments were performed in the 500-750 GHz band using a WR-1.5 vector network analyzer (VNA) as the source and receiver. Over the entire band, each array pixel can be optically turned on and off with an average modulation depth of ~20 dB and ~35 dB for ~4 cm2 and ~0.5 cm2 imaging areas, respectively. The modulation speed is ~1.3 kHz using the current DLP system and data acquisition software. Prototype imaging demonstrations have shown that a 256-pixel image can be obtained on the order of 10 seconds using compressed sensing (CS), and this speed can be improved greatly for potential real-time or video-rate THz imaging. This photo-induced coded-aperture imaging (PI-CAI) technique has been successfully applied to characterize THz beams in quasi-optical systems and THz horn antennas.
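
The Hadamard-coded sampling principle behind such reconfigurable apertures can be illustrated with a noiseless single-detector model. The scene size, the +/-1 weighting, and the noiseless setting are assumptions for the sketch; real masks use 0/1 transmission and differencing.

```python
import numpy as np

# Sketch of Hadamard-coded single-detector imaging: each mask pattern is one
# row of a Hadamard matrix, one scalar detector reading is taken per pattern,
# and the scene is recovered by the (orthogonal) inverse transform.

def sylvester_hadamard(n):
    """Hadamard matrix of order n (a power of two) with +/-1 entries."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 256                                   # a 16 x 16 scene, flattened
scene = np.random.default_rng(1).random(N)
H = sylvester_hadamard(N)
measurements = H @ scene                  # one detector reading per pattern
recovered = (H.T @ measurements) / N      # exact, since H.T @ H = N * I

assert np.allclose(recovered, scene)
```

Compressed sensing, as used in the prototype, takes fewer than N such measurements and recovers the scene with a sparse-reconstruction solver instead of the full inverse transform.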

  12. Task 3: PNNL Visit by JAEA Researchers to Participate in TODAM Code Applications to Fukushima Rivers and to Evaluate the Feasibility of Adaptation of FLESCOT Code to Simulate Radionuclide Transport in the Pacific Ocean Coastal Water Around Fukushima

    SciTech Connect

    Onishi, Yasuo

    2013-03-29

    Four JAEA researchers visited PNNL for two weeks in February 2013 to learn the PNNL-developed, unsteady, one-dimensional river model, TODAM, and the PNNL-developed, time-dependent, three-dimensional coastal water model, FLESCOT. These codes predict sediment and contaminant concentrations by accounting for sediment-radionuclide interactions, e.g., adsorption/desorption and transport-deposition-resuspension of sediment-sorbed radionuclides. The objective of the river and coastal water modeling is to simulate 134Cs and 137Cs migration in Fukushima rivers and the coastal water, and their accumulation in the river and ocean bed along the Fukushima coast. Forecasting the future cesium behavior in the river and coastal water under various scenarios would enable JAEA to assess the effectiveness of various on-land remediation activities and, if required, possible river and coastal water clean-up operations to reduce the contamination of the river and coastal water, agricultural products, fish and other aquatic biota. During the JAEA visit, PNNL presented TODAM and FLESCOT's theories and mathematical formulations; TODAM and FLESCOT model structures; past TODAM and FLESCOT applications; demonstrations of the two codes' capabilities on simple hypothetical river and coastal water cases; and the initial application of TODAM to the Ukedo River in Fukushima, with JAEA researchers participating in its modeling. PNNL also presented topics relevant to Fukushima environmental assessment and remediation, including PNNL molecular modeling and EMSL computer facilities; cesium adsorption/desorption characteristics; experience connecting molecular science research results to macro-scale model applications in the environment; an EMSL tour; and a Hanford Site road tour. PNNL and JAEA also developed a future course of action for joint research projects on the Fukushima environmental and remediation assessments.

  13. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding

    PubMed Central

    Malmierca, Manuel S.; Anderson, Lucy A.; Antunes, Flora M.

    2015-01-01

    To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies that demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex. PMID:25805974

  14. Solar energy apparatus with apertured shield

    NASA Technical Reports Server (NTRS)

    Collings, Roger J. (Inventor); Bannon, David G. (Inventor)

    1989-01-01

    A protective apertured shield for use about an inlet to a solar apparatus which includes a cavity receiver for absorbing concentrated solar energy. A rigid support truss assembly is fixed to the periphery of the inlet and projects radially inwardly therefrom to define a generally central aperture area through which solar radiation can pass into the cavity receiver. A non-structural, laminated blanket is spread over the rigid support truss in such a manner as to define an outer surface area and an inner surface area diverging radially outwardly from the central aperture area toward the periphery of the inlet. The outer surface area faces away from the inlet and the inner surface area faces toward the cavity receiver. The laminated blanket includes at least one layer of material, such as ceramic fiber fabric, having high infra-red emittance and low solar absorption properties, and another layer, such as metallic foil, having low infra-red emittance properties.

  15. Axial superresolution by synthetic aperture generation

    NASA Astrophysics Data System (ADS)

    Micó, V.; García, J.; Zalevsky, Z.

    2008-12-01

    The use of tilted illumination onto the input object in combination with time multiplexing is a useful technique to overcome the Abbe diffraction limit in imaging systems. It is based on the generation of an expanded synthetic aperture that improves the cutoff frequency (and thus the resolution limit) of the imaging system. In this paper we present an experimental validation of the fact that the generation of a synthetic aperture improves not only the lateral resolution but also the axial one. Thus, it is possible to achieve higher optical sectioning of three-dimensional (3D) objects than that defined by the theoretical resolution limit imposed by diffraction. Experimental results are provided for two different cases: a synthetic object (micrometer slide) imaged by a 0.14 numerical aperture (NA) microscope lens, and a biosample (swine sperm cells) imaged by a 0.42 NA objective.
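
The lateral part of the synthetic-aperture gain follows directly from the Abbe limit: adding the illumination NA to the collection NA enlarges the effective cutoff. A back-of-the-envelope sketch, in which the wavelength and the assumption of maximally tilted illumination are ours, not taken from the experiment:

```python
# Abbe lateral resolution limit d = lambda / (2 * NA); tilted illumination
# effectively adds the illumination NA to the collection NA, so the synthetic
# aperture shrinks the resolvable feature size accordingly.

def abbe_resolution(wavelength, na):
    """Abbe lateral resolution limit for a given numerical aperture."""
    return wavelength / (2 * na)

wavelength = 532e-9          # assumed illumination wavelength (m)
na_lens = 0.14               # the microscope lens NA quoted in the abstract
na_illum = 0.14              # off-axis illumination up to the lens NA (assumed)

d_plain = abbe_resolution(wavelength, na_lens)
d_synth = abbe_resolution(wavelength, na_lens + na_illum)
assert d_synth < d_plain
assert abs(d_plain / d_synth - 2.0) < 1e-9   # doubling the NA halves the limit
```

The paper's point is that the axial resolution improves along with this lateral gain, which the simple Abbe formula above does not capture.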

  16. Comparison of binocular through-focus visual acuity with monovision and a small aperture inlay.

    PubMed

    Schwarz, Christina; Manzanera, Silvestre; Prieto, Pedro M; Fernández, Enrique J; Artal, Pablo

    2014-10-01

    Corneal small aperture inlays provide extended depth of focus as a solution to presbyopia. As this procedure is becoming more popular, it is interesting to compare its performance with traditional approaches, such as monovision. Here, binocular visual acuity was measured as a function of object vergence in three subjects by using a binocular adaptive optics vision analyzer. Visual acuity was measured at two luminance levels (photopic and mesopic) under several optical conditions: 1) natural vision (4 mm pupils, best corrected distance vision), 2) pure-defocus monovision (+1.25 D add in the nondominant eye), 3) small aperture monovision (1.6 mm pupil in the nondominant eye), and 4) combined small aperture and defocus monovision (1.6 mm pupil and a +0.75 D add in the nondominant eye). Visual simulations of a small aperture corneal inlay suggest that the device extends DOF as effectively as traditional monovision in photopic light, in both cases at the cost of binocular summation. However, individual factors, such as aperture centration or sensitivity to mesopic conditions, should be considered to assure adequate visual outcomes. PMID:25360355

  17. Comparison of binocular through-focus visual acuity with monovision and a small aperture inlay

    PubMed Central

    Schwarz, Christina; Manzanera, Silvestre; Prieto, Pedro M.; Fernández, Enrique J.; Artal, Pablo

    2014-01-01

    Corneal small aperture inlays provide extended depth of focus as a solution to presbyopia. As this procedure is becoming more popular, it is interesting to compare its performance with traditional approaches, such as monovision. Here, binocular visual acuity was measured as a function of object vergence in three subjects by using a binocular adaptive optics vision analyzer. Visual acuity was measured at two luminance levels (photopic and mesopic) under several optical conditions: 1) natural vision (4 mm pupils, best corrected distance vision), 2) pure-defocus monovision (+1.25 D add in the nondominant eye), 3) small aperture monovision (1.6 mm pupil in the nondominant eye), and 4) combined small aperture and defocus monovision (1.6 mm pupil and a +0.75 D add in the nondominant eye). Visual simulations of a small aperture corneal inlay suggest that the device extends DOF as effectively as traditional monovision in photopic light, in both cases at the cost of binocular summation. However, individual factors, such as aperture centration or sensitivity to mesopic conditions, should be considered to assure adequate visual outcomes. PMID:25360355

  18. Novel multi-aperture approach for miniaturized imaging systems

    NASA Astrophysics Data System (ADS)

    Wippermann, F. C.; Brückner, A.; Oberdörster, A.; Reimann, A.

    2016-03-01

    The vast majority of cameras and imaging sensors relies on the identical single-aperture optics principle with the human eye as natural antetype. Multi-aperture approaches - in natural systems so-called compound eyes and in technology often referred to as array cameras - have advantages in terms of miniaturization, simplicity of the optics, and additional features such as depth information and refocusing enabled by the computational manipulation of the system's raw image data. The proposed imaging principle is based on a multitude of imaging channels transmitting different parts of the entire field of view. Adapted image processing algorithms are employed for the generation of the overall image by the stitching of the images of the different channels. The restriction of the individual channel's field of view leads to a less complex optical system targeting reduced fabrication cost. Due to a novel, linear morphology of the array camera setup, depth mapping with improved resolution can be achieved. We introduce a novel concept for miniaturized array cameras with several megapixel resolution targeting high-volume applications in mobile and automotive imaging with improved depth mapping, and explain design and fabrication aspects.

  19. PDII- Additional discussion of the dynamic aperture

    SciTech Connect

    Norman M. Gelfand

    2002-07-23

    This note is in the nature of an addition to the dynamic aperture calculations found in the report on the Proton Driver, FERMILAB-TM-2169. An extensive discussion of the Proton Driver lattice, as well as the nomenclature used to describe it, can be found in TM-2169. Basically, the proposed lattice is a racetrack design with the two arcs joined by two long straight sections. The straight sections are dispersion free. Tracking studies were undertaken with the objective of computing the dynamic aperture for the lattice, and some of the results have been incorporated into TM-2169. This note is a more extensive report of those calculations.

  20. Synthetic aperture radar capabilities in development

    SciTech Connect

    Miller, M.

    1994-11-15

    The Imaging and Detection Program (IDP) within the Laser Program is currently developing an X-band Synthetic Aperture Radar (SAR) to support the Joint US/UK Radar Ocean Imaging Program. The radar system will be mounted in the program's Airborne Experimental Test-Bed (AETB), where the initial mission is to image ocean surfaces and better understand the physics of low grazing angle backscatter. The Synthetic Aperture Radar presentation will discuss its overall functionality and include a brief discussion of the AETB's capabilities. Vital subsystems including radar, computer, navigation, antenna stabilization, and SAR focusing algorithms will be examined in more detail.

  1. Synthesis aperture femtosecond-pulsed digital holography

    NASA Astrophysics Data System (ADS)

    Zhu, Linwei; Sun, Meiyu; Chen, Jiannong; Yu, Yongjiang; Zhou, Changhe

    2013-09-01

    A new aperture-synthesis approach in femtosecond-pulse digital holography for obtaining a high resolution and a whole field of view of the reconstructed image is proposed. The subholograms are recorded only by delay scanning holograms that have different delay times between the object and reference beams. In addition, by using image processing techniques, the synthesis aperture digital hologram can be superposed accurately. Analysis and experimental results show that the walk-off in femtosecond off-axis digital holography caused by low coherence can be well eliminated. The resolution and the field of view of the reconstructed image can be improved effectively.

  2. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies the coding strategies of computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager. Increasing sensitivity in any dimension can significantly compromise the others. This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to exploit bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensional information of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degrading temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information on a two-dimensional (2D) detector. The corresponding spectral, temporal and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplex measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level while maintaining or gaining higher temporal resolution. The experimental results prove that appropriate coding strategies may increase sensing capacity by hundreds of times. Human auditory system has the astonishing ability in localizing, tracking, and filtering the selected sound sources or
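
A generic compressive reconstruction of the kind these coding strategies rely on can be sketched with iterative soft thresholding (ISTA). The sensing matrix, sparsity level, and all parameters below are illustrative assumptions, not the dissertation's systems.

```python
import numpy as np

# Compressive-sensing recovery sketch: a K-sparse signal is sensed with
# M < N random projections and recovered by ISTA, which minimizes
# ||y - A x||^2 / 2 + lam * ||x||_1.

rng = np.random.default_rng(0)
N, M, K = 128, 64, 5
x_true = np.zeros(N)
x_true[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random sensing matrix
y = A @ x_true                                 # compressive measurements

lam = 0.05
step = 1.0 / np.linalg.norm(A, 2) ** 2         # 1 / Lipschitz constant of grad
x = np.zeros(N)
for _ in range(500):
    r = x + step * (A.T @ (y - A @ x))                      # gradient step
    x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0)  # soft threshold

assert np.linalg.norm(x - x_true) / np.linalg.norm(x_true) < 0.4
```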

  3. Spatially variant apodization for squinted synthetic aperture radar images.

    PubMed

    Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo

    2007-08-01

    Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that improves sidelobe level and preserves resolution at the same time. This method implements a two-dimensional finite impulse response filter with adaptive taps depending on image information. Previously published papers analyze SVA at the Nyquist rate or at higher rates, focused on strip-map synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behaviour are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images in order to demonstrate the good quality achieved along with efficient computation.
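
The core SVA operation can be sketched in one dimension: each Nyquist-rate sample chooses its own raised-cosine window weight, clamped to [0, 0.5]. This is a simplified real-valued version for illustration, not the squint-adapted algorithm of the paper.

```python
import numpy as np

# One-dimensional, real-valued SVA sketch (the paper's algorithm is
# two-dimensional, complex-valued, and adapted to squinted geometries).

def sva_1d(x):
    """Apply spatially variant apodization to one Nyquist-sampled line."""
    y = x.copy()
    for i in range(1, len(x) - 1):
        s = x[i - 1] + x[i + 1]
        if s == 0:
            continue
        w = -x[i] / s                    # unconstrained optimum for this sample
        w = min(max(w, 0.0), 0.5)        # clamp to the raised-cosine family
        y[i] = x[i] + w * s              # zero whenever the optimum is interior
    return y

# A point target sampled half a cell off-grid shows the classic sinc sidelobes.
point_response = np.sinc(np.arange(-32, 33) + 0.5)
out = sva_1d(point_response)

assert np.isclose(out[31], point_response[31])   # mainlobe peak preserved
assert np.max(np.abs(out[:20])) < np.max(np.abs(point_response[:20]))  # sidelobes cut
```

Each output sample is a valid windowed response, so resolution is preserved while the sidelobes collapse, which is the property the paper must restore under squint.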

  4. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  5. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5 inch 1.44MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  6. Including outer scale effects in zonal adaptive optics calculations.

    PubMed

    Ellerbroek, B L

    1997-12-20

    Mellin transform techniques are applied to evaluate the covariance of the integrated turbulence-induced phase distortions along a pair of ray paths through the atmosphere from two points in a telescope aperture to a pair of sources at finite or infinite range. The derivation is for the case of a finite outer scale and a von Karman turbulence spectrum. The Taylor hypothesis is assumed if the two phase distortions are evaluated at two different times and amplitude scintillation effects are neglected. The resulting formula for the covariance is a power series in one variable for the case of a fixed atmospheric wind velocity profile and a power series in two variables for a fixed wind-speed profile with a random and uniformly distributed wind direction. These formulas are computationally efficient and can be easily integrated into computer codes for the numerical evaluation of adaptive optics system performance. Sample numerical results are presented to illustrate the effect of a finite outer scale on the performance of natural and laser guide star adaptive optics systems for an 8-m astronomical telescope. A hypothetical outer scale of 10 m significantly reduces the magnitude of tilt anisoplanatism, thereby improving the performance of a laser guide star adaptive optics system if the auxiliary natural star used for full-aperture tip/tilt sensing is offset from the science field. The reduction in higher-order anisoplanatism that is due to a 10-m outer scale is smaller, and the off-axis performance of a natural guide star adaptive optics system is not significantly improved.
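
The role of the outer scale can be illustrated with the von Kármán phase spectrum, which saturates below the outer-scale frequency 1/L0. This is a standard textbook form, not the paper's Mellin-transform series; the r0 and L0 values are assumptions.

```python
import numpy as np

# The von Karman phase power spectrum matches the Kolmogorov spectrum at high
# spatial frequencies but saturates below 1/L0, which reduces the variance of
# large-scale (e.g. tilt) aberrations, as the abstract's 10 m example shows.

def von_karman_psd(kappa, r0, L0):
    """Phase power spectrum versus spatial frequency kappa."""
    kappa0 = 1.0 / L0
    return 0.0229 * r0 ** (-5.0 / 3.0) * (kappa ** 2 + kappa0 ** 2) ** (-11.0 / 6.0)

kappa = np.logspace(-3, 2, 500)                       # spatial frequencies (1/m)
psd_10m = von_karman_psd(kappa, r0=0.15, L0=10.0)     # assumed r0, 10 m outer scale
psd_kol = 0.0229 * 0.15 ** (-5.0 / 3.0) * kappa ** (-11.0 / 3.0)  # L0 -> infinity

# A finite outer scale only suppresses the low-frequency end of the spectrum.
assert np.all(psd_10m <= psd_kol + 1e-12)
assert np.isclose(psd_10m[-1], psd_kol[-1], rtol=1e-3)
assert psd_10m[0] < 0.01 * psd_kol[0]
```

Since full-aperture tilt lives at the lowest spatial frequencies, this suppression is why a 10 m outer scale mainly reduces tilt anisoplanatism in the paper's examples.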

  7. Radiation safety considerations in proton aperture disposal.

    PubMed

    Walker, Priscilla K; Edwards, Andrew C; Das, Indra J; Johnstone, Peter A S

    2014-04-01

    Beam shaping in scattered and uniform scanned proton beam therapy (PBT) is commonly accomplished with brass apertures. Due to proton interactions, these devices become radioactive and can pose safety issues and radiation hazards. Nearly 2,000 patient-specific devices per year are used at Indiana University Cyclotron Operations (IUCO) and the IU Health Proton Therapy Center (IUHPTC); these devices require proper guidelines for disposal. IUCO practice has been to store these apertures for at least 4 mo to allow for safe transfer to recycling contractors. The devices require decay in two staged secure locations, including at least 4 mo in a separate building, at which point half are ready for disposal. At 6 mo, 20-30% of apertures require further storage. This process requires significant space and manpower and should be considered in the design process for new clinical facilities. More widespread adoption of pencil beam or spot scanning nozzles may obviate this issue, as apertures will then no longer be necessary.
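
The storage times quoted above follow simple exponential decay. A sketch with an illustrative activity ratio and half-life, not measured values from the paper (for context, Zn-65, a common activation product in brass, has a half-life near 244 days, about 8 months):

```python
import math

# Activity falls as A(t) = A0 * 2 ** (-t / t_half), so the required storage
# time scales with the log of the ratio between initial activity and the
# release threshold. All numbers here are illustrative assumptions.

def months_to_threshold(a0, threshold, half_life_months):
    """Months of storage until activity decays below the release threshold."""
    return half_life_months * math.log2(a0 / threshold)

# An aperture starting at 16x the release threshold needs four half-lives.
t = months_to_threshold(a0=16.0, threshold=1.0, half_life_months=8.0)
assert abs(t - 32.0) < 1e-9
```

The logarithmic dependence explains why a fixed staged-storage schedule clears only a fraction of apertures at each checkpoint: hotter devices need disproportionately longer decay.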

  8. Aperture synthesis imaging from the moon

    NASA Technical Reports Server (NTRS)

    Burns, Jack O.

    1991-01-01

    Four candidate imaging aperture synthesis concepts are described for possible emplacement on the moon beginning in the next decade. These include an optical interferometer with 10 microarcsec resolution, a submillimeter array with 6 milliarcsec resolution, a moon-earth VLBI experiment, and a very low frequency interferometer in lunar orbit.

  9. Clutter free synthetic aperture radar correlator

    NASA Technical Reports Server (NTRS)

    Jain, A.

    1977-01-01

    A synthetic aperture radar correlation system including a moving diffuser located at the image plane of a radar processor is described. The output of the moving diffuser is supplied to a lens whose impulse response is at least as wide as that of the overall processing system. A significant reduction in clutter results.

  10. A modular approach toward extremely large apertures

    NASA Astrophysics Data System (ADS)

    Woods, A. A., Jr.

    1981-02-01

    Modular antenna construction can provide a significant increase in reflector aperture size over deployable reflectors. The modular approach allows reflective mesh surfaces to be supported by a minimum of structure. The kinematics of the selected deployable design approach were validated by the subscale demonstration model. Further design refinements on the module structural/joints and design optimization on intermodule joints are needed.

  11. Depolarization by high-aperture focusing

    NASA Astrophysics Data System (ADS)

    Bahlmann, Karsten; Hell, Stefan W.

    2002-05-01

We propose and demonstrate a method employing ferroelectric monomolecular layers by which it is possible to precisely measure the planar light field polarization in the focus of a lens. This method allowed us to establish, for the first time to our knowledge, the perpendicularly oriented field that is anticipated at high apertures. For a numerical aperture 1.4 oil immersion lens illuminated with linearly polarized plane waves, the integral of the modulus square of the perpendicular component amounts to (1.51 ± 0.2)% of that of the initial polarization. It is experimentally proven that depolarization decreases with decreasing aperture angle and increases when using annular apertures. Annuli formed by a central obstruction with a diameter of 89% of that of the entrance pupil raise the integral to 5.5%. This compares well with the value of 5.8% predicted by electromagnetic focusing theory; the depolarization is also due in part to imperfections connected with focusing by refraction. Besides fluorescence microscopy and single molecule spectroscopy, the measured intensity of the depolarized component in the focal plane is relevant to all forms of light spectroscopy combining strong focusing with polarization analysis.

  12. Agile multiple aperture imager receiver development

    NASA Astrophysics Data System (ADS)

    Lees, David E. B.; Dillon, Robert F.

    1990-02-01

A variety of unconventional imaging schemes have been investigated in recent years that rely on small, unphased optical apertures (subapertures) to measure properties of an incoming optical wavefront and recover images of distant objects without using precisely figured, large-aperture optical elements. Such schemes offer several attractive features. They provide the potential to create very large effective apertures that are expandable over time and can be launched into space in small pieces. Since the subapertures are identical in construction, they may be mass producible at potentially low cost. A preliminary design for a practical low-cost optical receiver is presented. The multiple aperture design has high sensitivity and wide field-of-view, and is lightweight. A combination of spectral, temporal, and spatial background suppression is used to achieve daytime operation at low signal levels. Modular packaging to make the number of receiver subapertures conveniently scalable is also presented. The design is appropriate to a ground-based proof-of-concept experiment for long-range active speckle imaging.

  13. Interdisciplinary science with large aperture detectors

    NASA Astrophysics Data System (ADS)

    Wiencke, Lawrence

    2013-06-01

    Large aperture detector systems to measure high energy cosmic rays also offer unique opportunities in other areas of science. Disciplines include geophysics such as seismic and volcanic activity, and atmospheric science ranging from clouds to lightning to aerosols to optical transients. This paper will discuss potential opportunities based on the ongoing experience of the Pierre Auger Observatory.

  14. Radiation safety considerations in proton aperture disposal.

    PubMed

    Walker, Priscilla K; Edwards, Andrew C; Das, Indra J; Johnstone, Peter A S

    2014-04-01

Beam shaping in scattered and uniform-scanning proton beam therapy (PBT) is commonly accomplished with brass apertures. Due to proton interactions, these devices become radioactive and can pose safety issues and radiation hazards. Nearly 2,000 patient-specific devices per year are used at Indiana University Cyclotron Operations (IUCO) and IU Health Proton Therapy Center (IUHPTC); these devices require proper guidelines for disposal. IUCO practice has been to store these apertures for at least 4 mo to allow for safe transfer to recycling contractors. The devices require decay in two staged secure locations, including at least 4 mo in a separate building, at which point half are ready for disposal. At 6 mo, 20-30% of apertures require further storage. This process requires significant space and manpower and should be considered in the design process for new clinical facilities. More widespread adoption of pencil beam or spot scanning nozzles may obviate this issue, as apertures will then no longer be necessary. PMID:24562073

  15. RF Performance of Membrane Aperture Shells

    NASA Technical Reports Server (NTRS)

Flint, Eric M.; Lindler, Jason E.; Thomas, David L.; Romanofsky, Robert

    2007-01-01

This paper provides an overview of recent results establishing the suitability of Membrane Aperture Shell Technology (MAST) for Radio Frequency (RF) applications. These single-surface shells are capable of maintaining their figure with no preload or pressurization and minimal boundary support, yet can be compactly roll-stowed and passively self-deploy. As such, they are a promising technology for enabling a future generation of RF apertures. In this paper, we review recent experimental and numerical results quantifying suitable RF performance. It is shown that candidate materials possess metallic coatings with sufficiently low surface roughness and that these materials can be efficiently fabricated into RF-relevant doubly curved shapes. A numerical justification for using a reflectivity metric, as opposed to the more standard RF designer metric of skin depth, is presented, and the resulting ability to use relatively thin coating thicknesses is experimentally validated with material sample tests. The validity of these independent film sample measurements is then confirmed through experimental results measuring RF performance for reasonably sized doubly curved apertures. Currently available best results are 22 dBi gain at 3 GHz (S-Band) for a 0.5 m aperture tested in prime focus mode, 28 dBi gain for the same antenna in the C-Band (4 to 6 GHz), and 36.8 dBi for a smaller 0.25 m antenna tested at 32 GHz in the Ka-Band. RF range test results for a segmented aperture (one possible scaling approach) are shown as well. Measured antenna system efficiencies (relative to the unachievable ideal) for these on-axis tests are generally quite good, typically ranging from 50 to 90%.
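The quoted efficiencies can be sanity-checked against the ideal circular-aperture gain formula, G = (πD/λ)². A minimal sketch (the function name is ours, for illustration only):

```python
import math

def ideal_gain_dbi(diameter_m, freq_hz):
    """Gain of a lossless, uniformly illuminated circular aperture,
    G = (pi * D / lambda)^2, expressed in dBi."""
    lam = 3.0e8 / freq_hz  # wavelength from speed of light
    return 10.0 * math.log10((math.pi * diameter_m / lam) ** 2)

# Ideal gain for the 0.5 m shell at 3 GHz, to compare with the measured 22 dBi
print(round(ideal_gain_dbi(0.5, 3e9), 1))  # prints 23.9
```

The measured 22 dBi then corresponds to about 64% aperture efficiency, within the 50 to 90% range quoted above.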

  16. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

Vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s. Requires 3 to 4 million multiplications and additions per second. Combines advantages of adaptive/predictive coding and code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. Vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.
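The short-term linear prediction underlying both coder families can be sketched with a toy predictor fit by solving the autocorrelation normal equations. This illustration is ours, not the authors' algorithm: practical coders use the Levinson-Durbin recursion and adapt the predictor frame by frame.

```python
import numpy as np

def lpc_coefficients(signal, order):
    """Fit a linear predictor by solving the autocorrelation normal equations.
    Illustrative only; real speech coders use faster recursions."""
    # Autocorrelation at lags 0..order
    r = np.array([np.dot(signal[:len(signal) - k], signal[k:])
                  for k in range(order + 1)])
    # Toeplitz system R a = r[1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:])

rng = np.random.default_rng(0)
x = np.sin(0.2 * np.arange(400)) + 0.01 * rng.standard_normal(400)  # toy "speech"
order = 8
a = lpc_coefficients(x, order)
# Predict each sample from the previous `order` samples; measure the residual
pred = np.array([np.dot(a, x[n - 1::-1][:order]) for n in range(order, len(x))])
residual = x[order:] - pred
print(float(residual.var() / x.var()))  # small: most signal energy is predicted
```

In an adaptive/predictive coder, it is this low-energy residual, rather than the signal itself, that is quantized and transmitted.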

  17. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  18. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and yields results closer to reality than theoretical estimations. In this approach, a fast and parallel tracking code is very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.
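As an illustration of DA estimation by direct tracking, a toy one-turn map (a linear rotation plus a thin sextupole kick, a hypothetical lattice unrelated to NSLS-II or TESLA) can be scanned for the largest surviving launch amplitude:

```python
import math

def survives(x0, turns=1000, mu=0.205, k2=1.0, limit=10.0):
    """Track one particle through a toy one-turn map: a linear rotation
    (fractional tune mu) followed by a thin sextupole kick."""
    cos_m, sin_m = math.cos(2 * math.pi * mu), math.sin(2 * math.pi * mu)
    x, xp = x0, 0.0
    for _ in range(turns):
        x, xp = cos_m * x + sin_m * xp, -sin_m * x + cos_m * xp  # linear optics
        xp -= k2 * x * x                                         # sextupole kick
        if abs(x) > limit or abs(xp) > limit:
            return False                                         # particle lost
    return True

# Bisect for an estimate of the dynamic aperture (largest stable amplitude);
# stability is not strictly monotonic in amplitude, so this is approximate.
lo, hi = 0.0, 2.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if survives(mid) else (lo, mid)
print(f"approximate dynamic aperture: x ~ {lo:.3f}")
```

Production codes do the same survival test with realistic lattice errors, many seeds, and MPI-parallel particle batches.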

  19. Vacuum aperture isolator for retroreflection from laser-irradiated target

    DOEpatents

    Benjamin, Robert F.; Mitchell, Kenneth B.

    1980-01-01

The disclosure is directed to a vacuum aperture isolator for retroreflection from a laser-irradiated target. Within a vacuum chamber are disposed a beam focusing element, a disc having an aperture, and a recollimating element. The edge of the focused beam impinges on the edge of the aperture to produce a plasma which refracts any retroreflected light from the laser's target.

  20. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, Walter F.

    1985-01-01

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.

  1. Dual aperture dipole magnet with second harmonic component

    DOEpatents

    Praeg, W.F.

    1983-08-31

    An improved dual aperture dipole electromagnet includes a second-harmonic frequency magnetic guide field winding which surrounds first harmonic frequency magnetic guide field windings associated with each aperture. The second harmonic winding and the first harmonic windings cooperate to produce resultant magnetic waveforms in the apertures which have extended acceleration and shortened reset portions of electromagnet operation.

  2. Multiple component codes based generalized LDPC codes for high-speed optical transport.

    PubMed

    Djordjevic, Ivan B; Wang, Ting

    2014-07-14

A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes such as convolutional and spatially-coupled codes can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
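To illustrate why short algebraic codes make convenient local codes, a (7,4) Hamming syndrome decoder fits in a few lines. (The GLDPC decoder in the paper uses soft MAP decoding via the Ashikhmin-Lytsin algorithm; this is only the classical hard-decision view.)

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code, a typical "local code" in a
# GLDPC construction: column j holds the binary representation of j+1.
H = np.array([[int(b) for b in f"{c:03b}"] for c in range(1, 8)]).T

def correct_single_error(word):
    """Syndrome-decode one received 7-bit word (corrects any single bit flip)."""
    syndrome = H @ word % 2
    pos = int("".join(map(str, syndrome)), 2)  # syndrome reads off error position
    if pos:
        word = word.copy()
        word[pos - 1] ^= 1
    return word

codeword = np.zeros(7, dtype=int)           # all-zeros is always a codeword
received = codeword.copy()
received[4] ^= 1                            # channel flips bit 5
print(correct_single_error(received))       # recovers the all-zeros codeword
```

In a GLDPC code, each generalized check node constrains its attached variable nodes to be a codeword of such a local code, rather than a single parity equation.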

  3. Development of a resettable, flexible aperture cover

    NASA Technical Reports Server (NTRS)

    Christiansen, Scott

    1992-01-01

A flexible aperture cover and latch were developed for the Thermal Ion Detection Experiment (TIDE). The latch utilized a high-output paraffin (HOP) linear motor to supply the force to operate the latch. The initial approach for the cover was to use a heat-treated, coiled strip of 0.05-mm (0.002-inch)-thick beryllium-copper as the cover. Development test results showed that one end of the cover developed a trajectory during release that threatened to impact adjacent instruments. An alternative design utilizing constant force springs and a flexible, metallized Kapton cover was then tested. Results from development tests, microgravity tests, and lessons learned during the development of the aperture cover are discussed.

  4. Compact high precision adjustable beam defining aperture

    DOEpatents

    Morton, Simon A; Dickert, Jeffrey

    2013-07-02

    The present invention provides an adjustable aperture for limiting the dimension of a beam of energy. In an exemplary embodiment, the aperture includes (1) at least one piezoelectric bender, where a fixed end of the bender is attached to a common support structure via a first attachment and where a movable end of the bender is movable in response to an actuating voltage applied to the bender and (2) at least one blade attached to the movable end of the bender via a second attachment such that the blade is capable of impinging upon the beam. In an exemplary embodiment, the beam of energy is electromagnetic radiation. In an exemplary embodiment, the beam of energy is X-rays.

  5. Performance limits for Synthetic Aperture Radar.

    SciTech Connect

    Doerry, Armin Walter

    2006-02-01

The performance of a Synthetic Aperture Radar (SAR) system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to "get your arms around" the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics, no matter how bright the engineer tasked to generate a system design. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall SAR system. For example, there are definite optimum frequency bands that depend on weather conditions and range, and the minimum radar PRF for a fixed real antenna aperture dimension is independent of frequency. While the information herein is not new to the literature, its collection into a single report is intended to offer some value in reducing the "seek time".

  6. Complex synthetic aperture radar data compression

    NASA Astrophysics Data System (ADS)

    Cirillo, Francis R.; Poehler, Paul L.; Schwartz, Debra S.; Rais, Houra

    2002-08-01

Existing compression algorithms, primarily designed for visible electro-optical (EO) imagery, do not work well for Synthetic Aperture Radar (SAR) data. The best compression ratios achieved to date are less than 10:1 with minimal degradation to the phase data. Previously, phase data has been discarded with only magnitude data saved for analysis. Now that the importance of phase has been recognized for Interferometric Synthetic Aperture Radar (IFSAR), Coherent Change Detection (CCD), and polarimetry, requirements exist to preserve, transmit, and archive both components. Bandwidth and storage limitations on existing and future platforms make compression of this data a top priority. This paper presents results obtained using a new compression algorithm designed specifically to compress SAR imagery, while preserving both magnitude and phase information at compression ratios of 20:1 and better.
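A crude illustration of why SAR compression is treated separately from EO imagery: quantizing magnitude and phase on separate uniform grids keeps phase errors bounded. This toy quantizer is ours and is far simpler than the algorithm described in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize_complex(slc, mag_bits=6, phase_bits=6):
    """Quantize magnitude and phase separately -- a simplistic stand-in for
    SAR-specific codecs, which (unlike EO image codecs) must keep phase
    errors small for IFSAR and CCD."""
    mag, ph = np.abs(slc), np.angle(slc)
    m_levels, p_levels = 2**mag_bits - 1, 2**phase_bits - 1
    m_q = np.round(mag / mag.max() * m_levels) / m_levels * mag.max()
    p_q = np.round((ph + np.pi) / (2 * np.pi) * p_levels) / p_levels \
          * (2 * np.pi) - np.pi
    return m_q * np.exp(1j * p_q)

slc = rng.standard_normal(4096) + 1j * rng.standard_normal(4096)  # toy scene
rec = quantize_complex(slc)
snr_db = 10 * np.log10(np.mean(np.abs(slc)**2) / np.mean(np.abs(rec - slc)**2))
print(round(float(snr_db), 1))  # reconstruction SNR in dB
```

Generic EO codecs optimize only magnitude fidelity, which is why their phase degradation makes them unsuitable for interferometric products.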

  7. CRTF Real-Time Aperture Flux system

    SciTech Connect

    Davis, D.B.

    1980-01-01

The Real-Time Aperture Flux system (RTAF) is a test measurement system designed to determine the input power per unit area (flux density) during solar experiments conducted at the Central Receiver Test Facility, Sandia National Laboratories, Albuquerque, New Mexico. The RTAF is capable of using both thermal sensors and photon sensors to determine the flux densities in the RTAF measuring plane. These data are manipulated in various ways to derive input power and flux density distribution for solar experiments.

  8. Aperture modulated, translating bed total body irradiation

    SciTech Connect

    Hussain, Amjad; Villarreal-Barajas, Jose Eduardo; Dunscombe, Peter; Brown, Derek W.

    2011-02-15

Purpose: Total body irradiation (TBI) techniques aim to deliver a uniform radiation dose to a patient with an irregular body contour and a heterogeneous density distribution to within ±10% of the prescribed dose. In the current article, the authors present a novel, aperture modulated, translating bed TBI (AMTBI) technique that produces a high degree of dose uniformity throughout the entire patient. Methods: The radiation beam is dynamically shaped in two dimensions using a multileaf collimator (MLC). The irregular surface compensation algorithm in the Eclipse treatment planning system is used for fluence optimization, which is performed based on penetration depth and internal inhomogeneities. Two optimal fluence maps (AP and PA) are generated and beam apertures are created to deliver these optimal fluences. During treatment, the patient/phantom is translated on a motorized bed close to the floor (source to bed distance: 204.5 cm) under a stationary radiation beam with a 0° gantry angle. The bed motion and dynamic beam apertures are synchronized. Results: The AMTBI technique produces a more homogeneous dose distribution than fixed open beam translating bed TBI. In phantom studies, the dose deviation along the midline is reduced from 10% to less than 5% of the prescribed dose in the longitudinal direction. Dose to the lung is reduced by more than 15% compared to the unshielded fixed open beam technique. At the lateral body edges, the dose received from the open beam technique was 20% higher than that prescribed at umbilicus midplane. With AMTBI the dose deviation in this same region is reduced to less than 3% of the prescribed dose. Validation of the technique was performed using thermoluminescent dosimeters in a Rando phantom. Agreement between calculation and measurement was better than 3% in all cases. Conclusions: A novel, translating bed, aperture modulated TBI technique that employs dynamically shaped MLC-defined beams is shown to improve dose uniformity.

  9. Variable-Aperture Reciprocating Reed Valve

    NASA Technical Reports Server (NTRS)

    Lindner, Jeffrey L. (Inventor); Myers, W. Neill (Inventor); Kelley, Anthony R. (Inventor); Yang, Hong Q. (Inventor)

    2015-01-01

    A variable-aperture reciprocating reed valve includes a valve body defining a through hole region having a contoured-profile portion. A semi-rigid plate is affixed on one side thereof to the valve body to define a cantilever extending across the through hole region. At least one free edge of the cantilever opposes the contoured-profile portion of the through hole region in a non-contact relationship.

  10. Effective wavelength scaling of rectangular aperture antennas.

    PubMed

    Chen, Yuanyuan; Yu, Li; Zhang, Jiasen; Gordon, Reuven

    2015-04-20

    We investigate the resonances of aperture antennas from the visible to the terahertz regime, with comparison to comprehensive simulations. Simple piecewise analytic behavior is found for the wavelength scaling over the entire spectrum, with a linear regime through the visible and near-IR. This theory will serve as a useful and simple design tool for applications including biosensors, nonlinear plasmonics and surface enhanced spectroscopies. PMID:25969079

  11. Exploiting Decorrelations In Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Zebker, Howard A.; Villasenor, John D.

    1994-01-01

Temporal decorrelation between synthetic-aperture-radar data acquired on subsequent passes along the same or nearly the same trajectory serves as a measure of change in the target scene. Based partly on mathematical models of the statistics of correlations between first- and second-pass radar echoes. Also based partly on Fourier-transform relations between radar-system impulse response and decorrelation functions, particularly those expressing the decorrelation effects of rotation and horizontal shift of trajectories between the two passes.
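The degree of temporal decorrelation between two passes is commonly summarized by the sample complex coherence magnitude; a minimal sketch with simulated data (the 10% change fraction below is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def coherence(s1, s2):
    """Sample complex coherence magnitude between two acquisitions;
    values near 1 indicate little temporal decorrelation."""
    num = np.abs(np.vdot(s1, s2))
    return num / np.sqrt(np.vdot(s1, s1).real * np.vdot(s2, s2).real)

# Two simulated passes over the same scene; the second mixes in 10%
# uncorrelated power to mimic temporal change between acquisitions.
scene = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)
change = rng.standard_normal(5000) + 1j * rng.standard_normal(5000)
pass1 = scene
pass2 = np.sqrt(0.9) * scene + np.sqrt(0.1) * change
print(round(float(coherence(pass1, pass2)), 2))  # close to sqrt(0.9) ~ 0.95
```

Mapping this statistic over local windows of two registered SAR images yields a change map of the scene.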

  12. The radiation from apertures in curved surfaces

    NASA Technical Reports Server (NTRS)

    Pathak, P. H.; Kouyoumjian, R. G.

    1973-01-01

The geometrical theory of diffraction is extended to treat the radiation from apertures or slots in convex, perfectly-conducting surfaces. It is assumed that the tangential electric field in the aperture is known so that an equivalent, infinitesimal source can be defined at each point in the aperture. Surface rays emanate from this source which is a caustic of the ray system. A launching coefficient is introduced to describe the excitation of the surface ray modes. If the field radiated from the surface is desired, the ordinary diffraction coefficients are used to determine the field of the rays shed tangentially from the surface rays. The field of the surface ray modes is not the field on the surface; hence if the mutual coupling between slots is of interest, a second coefficient related to the launching coefficient must be employed. In the region adjacent to the shadow boundary, the component of the field directly radiated from the source is represented by Fock-type functions. In the illuminated region the incident radiation from the source (this does not include the diffracted field components) is treated by geometrical optics. This extension of the geometrical theory of diffraction is applied to calculate the radiation from slots on elliptic cylinders, spheres and spheroids.

  13. Restoring Aperture Profile At Sample Plane

    SciTech Connect

    Jackson, J L; Hackel, R P; Lungershausen, A W

    2003-08-03

Off-line conditioning of full-size optics for the National Ignition Facility required a beam delivery system to allow conditioning lasers to rapidly raster scan samples while achieving several technical goals. The main purpose of the optical system designed was to reconstruct at the sample plane the flat beam profile found at the laser aperture with significant reductions in beam wander to improve scan times. Another design goal was the ability to vary the beam size at the sample to scan at different fluences while utilizing all of the laser power and minimizing processing time. An optical solution was developed using commercial off-the-shelf lenses. The system incorporates a six meter relay telescope and two sets of focusing optics. The spacing of the focusing optics is changed to allow the fluence on the sample to vary from 2 to 14 joules per square centimeter in discrete steps. More importantly, these optics use the special properties of image relaying to image the aperture plane onto the sample to form a pupil relay with a beam profile corresponding almost exactly to the flat profile found at the aperture. A flat beam profile speeds scanning by providing a uniform intensity across a larger area on the sample. The relayed pupil plane is more stable with regard to jitter and beam wander. Image relaying also reduces other perturbations from diffraction, scatter, and focus conditions. Image relaying, laser conditioning, and the optical system designed to accomplish the stated goals are discussed.

  14. Synthetic aperture radar processing with tiered subapertures

    SciTech Connect

    Doerry, A.W.

    1994-06-01

    Synthetic Aperture Radar (SAR) is used to form images that are maps of radar reflectivity of some scene of interest, from range soundings taken over some spatial aperture. Additionally, the range soundings are typically synthesized from a sampled frequency aperture. Efficient processing of the collected data necessitates using efficient digital signal processing techniques such as vector multiplies and fast implementations of the Discrete Fourier Transform. Inherent in image formation algorithms that use these is a trade-off between the size of the scene that can be acceptably imaged, and the resolution with which the image can be made. These limits arise from migration errors and spatially variant phase errors, and different algorithms mitigate these to varying degrees. Two fairly successful algorithms for airborne SARs are Polar Format processing, and Overlapped Subaperture (OSA) processing. This report introduces and summarizes the analysis of generalized Tiered Subaperture (TSA) techniques that are a superset of both Polar Format processing and OSA processing. It is shown how tiers of subapertures in both azimuth and range can effectively mitigate both migration errors and spatially variant phase errors to allow virtually arbitrary scene sizes, even in a dynamic motion environment.

  15. Biomineral repair of abalone shell apertures.

    PubMed

    Cusack, Maggie; Guo, Dujiao; Chung, Peter; Kamenos, Nicholas A

    2013-08-01

The shell of the gastropod mollusc, abalone, is comprised of nacre with an outer prismatic layer that is composed of either calcite or aragonite or both, depending on the species. A striking characteristic of the abalone shell is the row of apertures along the dorsal margin. As the organism and shell grow, new apertures are formed and the preceding ones are filled in. Detailed investigations, using electron backscatter diffraction, of the infill in three species of abalone: Haliotis asinina, Haliotis gigantea and Haliotis rufescens reveal that, like the shell, the infill is composed mainly of nacre with an outer prismatic layer. The infill prismatic layer has the same mineralogy as the original shell prismatic layer. In H. asinina and H. gigantea, the prismatic layers of the shell and infill are made of aragonite, while in H. rufescens both are composed of calcite. Abalone builds the infill material with the same high level of biological control, replicating the structure, mineralogy and crystallographic orientation of the shell. The infill of abalone apertures presents us with insight into what is, effectively, shell repair.

  16. Sharing code.

    PubMed

    Kubilius, Jonas

    2014-01-01

Sharing code is becoming increasingly important in the wake of Open Science. In this review I describe and compare two popular code-sharing utilities, GitHub and Open Science Framework (OSF). GitHub is a mature, industry-standard tool but lacks features aimed specifically at researchers. In comparison, OSF offers a one-stop solution for researchers, but much of its functionality is still under development. I conclude by listing alternative lesser-known tools for code and materials sharing.

  17. Arbitrary Lagrangian Eulerian Adaptive Mesh Refinement

    SciTech Connect

    Koniges, A.; Eder, D.; Masters, N.; Fisher, A.; Anderson, R.; Gunney, B.; Wang, P.; Benson, D.; Dixit, P.

    2009-09-29

This is a simulation code involving an ALE (arbitrary Lagrangian-Eulerian) hydrocode with AMR (adaptive mesh refinement) and pluggable physics packages for material strength, heat conduction, radiation diffusion, and laser ray tracing, developed at LLNL, UCSD, and Berkeley Lab. The code is an extension of the open source SAMRAI (Structured Adaptive Mesh Refinement Application Interface) code/library. The code can be used in laser facilities such as the National Ignition Facility. The code is also being applied to slurry flow (landslides).

  18. Imaging performance of annular apertures. II - Line spread functions

    NASA Technical Reports Server (NTRS)

    Tschunko, H. F. A.

    1978-01-01

Line images formed by aberration-free optical systems with annular apertures are investigated over the whole range of central obstruction ratios. Annular apertures form line images with central and side line groups. The number of lines in each line group is given by the ratio of the outer diameter of the annular aperture divided by the width of the annulus. The theoretical energy fraction of 0.889 in the central line of the image formed by an unobstructed aperture increases for centrally obstructed apertures to 0.932 for the central line group. Energy fractions for the central and side line groups are practically constant for all obstruction ratios and for each line group. The illumination of rectangular secondary apertures of various length/width ratios by apertures of various obstruction ratios is discussed.

  19. The sensitivity of synthetic aperture radiometers for remote sensing applications from space

    NASA Technical Reports Server (NTRS)

    Levine, D. M.

    1989-01-01

Aperture synthesis offers a means of realizing the full potential of microwave remote sensing from space by helping to overcome the limitations set by antenna size. The result is a potentially lighter, more adaptable structure for applications in space. However, because the physical collecting area is reduced, the signal-to-noise ratio is reduced and may adversely affect the radiometric sensitivity. Sensitivity is an especially critical issue for measurements to be made from low earth orbit because the motion of the platform limits the integration time available for forming an image. The purpose is to develop expressions for the sensitivity of remote sensing systems which use aperture synthesis. The objective is to develop basic equations general enough to be used to obtain the sensitivity of the several variations of aperture synthesis which were proposed for sensors in space. The conventional microwave imager (a scanning total power radiometer) is treated as a special case and a comparison of three synthetic aperture configurations with the conventional imager is presented.
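A reference point for such sensitivity analyses is the ideal total-power radiometer equation, ΔT = T_sys/√(Bτ); a small sketch with assumed numbers (illustrative only, not mission-specific):

```python
import math

def radiometric_sensitivity(t_sys_k, bandwidth_hz, integration_s):
    """Ideal total-power radiometer sensitivity, Delta-T = T_sys / sqrt(B*tau).
    Synthetic-aperture configurations carry additional factors from the
    reduced collecting area; this is only the conventional-imager baseline."""
    return t_sys_k / math.sqrt(bandwidth_hz * integration_s)

# e.g. 350 K system temperature, 20 MHz bandwidth, 10 ms of integration
print(round(radiometric_sensitivity(350.0, 20e6, 0.01), 2))  # prints 0.78 (kelvin)
```

The equation makes the orbital constraint concrete: halving the available integration time τ degrades ΔT by √2, which is why platform motion in low earth orbit is so critical.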

  20. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that is general and applies to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
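The circulant second-stage lifting can be sketched as replacing each base-matrix entry with a z×z circulant or zero block. The shift values below are arbitrary placeholders, not an optimized code design:

```python
import numpy as np

def lift(base, z, shifts):
    """Expand a protograph base matrix: every 1 becomes a z-by-z circulant
    (cyclically shifted identity) and every 0 a z-by-z zero block."""
    I = np.eye(z, dtype=int)
    rows = [[np.roll(I, shifts[i][j], axis=1) if base[i][j]
             else np.zeros((z, z), dtype=int)
             for j in range(len(base[0]))]
            for i in range(len(base))]
    return np.block(rows)

base = [[1, 1, 0],
        [0, 1, 1]]                     # tiny protograph: 2 checks, 3 variables
H = lift(base, z=4, shifts=[[1, 2, 0], [0, 3, 1]])
print(H.shape)                         # prints (8, 12): length scales with z
```

Choosing the lifting factor z sets the final codeword length while preserving the degree structure (and hence the threshold behavior) of the protograph.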

  1. Two-Dimensional Synthetic-Aperture Radiometer

    NASA Technical Reports Server (NTRS)

    LeVine, David M.

    2010-01-01

    A two-dimensional synthetic-aperture radiometer, now undergoing development, serves as a test bed for demonstrating the potential of aperture synthesis for remote sensing of the Earth, particularly for measuring spatial distributions of soil moisture and ocean-surface salinity. The goal is to use the technology for remote sensing aboard a spacecraft in orbit, but the basic principles of design and operation are applicable to remote sensing from aboard an aircraft, and the prototype of the system under development is designed for operation aboard an aircraft. In aperture synthesis, one utilizes several small antennas in combination with signal processing to obtain resolution that would otherwise require an antenna with a larger aperture (and, hence, one potentially more difficult to deploy in space). The principle upon which this system is based is similar to that of Earth-rotation aperture synthesis employed in radio astronomy. In this technique, the coherent products (correlations) of signals from pairs of antennas are obtained at different antenna-pair spacings (baselines). The correlation for each baseline yields a sample point in the Fourier transform of the brightness-temperature map of the scene. An image of the scene itself is then reconstructed by inverting the sampled transform. The predecessor of the present two-dimensional synthetic-aperture radiometer is a one-dimensional one, named the Electrically Scanned Thinned Array Radiometer (ESTAR). Operating in the L band, the ESTAR employs aperture synthesis in the cross-track dimension only, while using a conventional antenna for resolution in the along-track dimension. The two-dimensional instrument also operates in the L band, to be precise at a frequency of 1.413 GHz, in a frequency band restricted to passive use (no transmission) only. The L band was chosen because (1) the L band represents the long-wavelength end of the remote-sensing spectrum, where the problem of achieving adequate
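
The visibility-sampling principle behind aperture synthesis can be sketched in one dimension (a toy model that assumes every baseline is measured, so the Fourier plane is fully sampled; no instrument details are modeled):

```python
import numpy as np

# Toy 1-D brightness-temperature "scene" across the field of view.
N = 64
scene = np.zeros(N)
scene[20] = 1.0    # warm patch
scene[45] = 0.5    # cooler patch

# Each antenna-pair baseline measures one complex correlation (visibility),
# i.e. one sample of the Fourier transform of the brightness map.
visibilities = np.fft.fft(scene)

# With every baseline measured, the map is recovered by inverting the transform.
image = np.fft.ifft(visibilities).real
```

With a sparse set of baselines, only some Fourier samples are available and the inversion must cope with the missing spatial frequencies; that is the trade the thinned-array approach makes against antenna size.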

  2. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.
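
The benefit-to-cost flavor of such yield optimization can be sketched with a greedy pass over candidate visits. All targets, completeness gains and exposure times below are invented, and the paper's code solves the coupled problem (including revisit delays) globally rather than with a one-shot greedy pass.

```python
# Each tuple: (target, exoEarth completeness gained by this visit, days of
# exposure time it costs). Values are purely illustrative.
candidates = [
    ("A", 0.30, 2.0),
    ("A", 0.10, 2.0),
    ("B", 0.25, 1.0),
    ("B", 0.05, 1.0),
    ("C", 0.15, 4.0),
]
budget = 4.0  # total observing time available [days]

# Greedy: schedule the visits with the best completeness per unit time first.
plan, used, total_yield = [], 0.0, 0.0
for target, gain, cost in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    if used + cost <= budget:
        plan.append(target)
        used += cost
        total_yield += gain
```

The expected yield is the sum of completeness over scheduled observations; shrinking the aperture shrinks the per-visit gains, which is how yield requirements translate into a lower limit on aperture size.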

  3. Adaptive Mesh Refinement in CTH

    SciTech Connect

    Crawford, David

    1999-05-04

    This paper reports progress on implementing a new adaptive mesh refinement capability in the Eulerian multimaterial shock-physics code CTH. The adaptivity is block-based, with refinement and unrefinement occurring in an isotropic 2:1 manner. The code is designed to run on serial, multiprocessor and massively parallel platforms. An approximate factor-of-three improvement in memory and performance over comparable-resolution non-adaptive calculations has been demonstrated for a number of problems.
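
A minimal sketch of block-based refinement on a quadtree: each block splits 4-for-1 when flagged. The refinement flag below is a hypothetical stand-in for a shock indicator, and the pass that enforces the 2:1 level jump between *neighboring* blocks is omitted for brevity.

```python
# Each block is (level, x, y) on the unit square; refining splits it into
# four children at the next level.
def refine(block):
    level, x, y = block
    h = 0.5 ** (level + 1)                 # child block size
    return [(level + 1, x + dx, y + dy) for dx in (0.0, h) for dy in (0.0, h)]

def flagged(block, max_level=3):
    # Hypothetical indicator: refine blocks overlapping the "steep" right
    # half of the domain, up to a maximum refinement level.
    level, x, y = block
    size = 0.5 ** level
    return x + size > 0.5 and level < max_level

blocks, changed = [(0, 0.0, 0.0)], True
while changed:
    changed, new_blocks = False, []
    for b in blocks:
        if flagged(b):
            new_blocks.extend(refine(b))
            changed = True
        else:
            new_blocks.append(b)
    blocks = new_blocks
```

The memory saving comes from the coarse blocks left unrefined away from the feature of interest.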

  4. Experimental instrumentation system for the Phased Array Mirror Extendible Large Aperture (PAMELA) test program

    NASA Technical Reports Server (NTRS)

    Boykin, William H., Jr.

    1993-01-01

    Adaptive optics are used in telescopes both for viewing objects with minimum distortion and for transmitting laser beams with minimum beam divergence and dance. In order to test concepts on a smaller scale, NASA MSFC is in the process of setting up an adaptive optics test facility with precision (fractions of a wavelength) measurement equipment. The initial system under test is the adaptive optical telescope PAMELA (Phased Array Mirror Extendible Large Aperture). The goals of this test are: assessment of test-hardware specifications for the PAMELA application and determination of the sensitivities of instruments for measuring imperfections of PAMELA (and other adaptive optical telescopes); evaluation of the PAMELA system integration effort and test progress, with recommended actions to enhance these activities; and development of concepts and prototypes of experimental apparatus for PAMELA.

  5. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark.

    PubMed

    Zhang, Tiankui; Hu, Huasi; Jia, Qinggang; Zhang, Fengna; Chen, Da; Li, Zhenghong; Wu, Yuelei; Liu, Zhihua; Hu, Guang; Guo, Wei

    2012-11-01

    A Monte Carlo simulation of neutron coded imaging with an encoding aperture, for a Z-pinch source with a large, 5-mm-radius field of view, has been investigated, and the coded image obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A "residual watermark," which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation (because it amplifies statistical fluctuations), has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding-aperture cross section. The properties and underlying causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. Using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture; the result is close to that obtained using the PSF of the actual aperture.
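
A minimal GA reconstruction in the same spirit can be sketched in one dimension. The PSF, source and GA settings below are invented toys; the paper's GA and fitness normalization are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: the coded image is the source convolved with the
# aperture point spread function (PSF).
psf = np.array([1.0, 2.0, 1.0]) / 4.0
true_src = np.array([0.0, 0.0, 1.0, 0.5, 0.0, 0.0])
coded = np.convolve(true_src, psf, mode="same")

def fitness(src):
    # Negative residual between re-projected and measured coded images;
    # the GA maximizes this.
    return -np.linalg.norm(np.convolve(src, psf, mode="same") - coded)

# Minimal GA: elitist truncation selection plus Gaussian mutation.
pop = rng.random((40, true_src.size))
start_best = max(fitness(p) for p in pop)
for _ in range(300):
    pop = pop[np.argsort([fitness(p) for p in pop])[::-1]]   # best first
    parents = pop[:10]                                       # elitism
    kids = parents[rng.integers(0, 10, size=30)]
    kids = np.clip(kids + 0.05 * rng.standard_normal(kids.shape), 0.0, None)
    pop = np.vstack([parents, kids])

best = max(pop, key=fitness)
```

Because elitism keeps the best individual every generation, the fitness of the returned solution never falls below that of the initial population.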

  6. Genetic algorithms applied to reconstructing coded imaging of neutrons and analysis of residual watermark

    SciTech Connect

    Zhang Tiankui; Hu Huasi; Jia Qinggang; Zhang Fengna; Liu Zhihua; Hu Guang; Guo Wei; Chen Da; Li Zhenghong; Wu Yuelei

    2012-11-15

    A Monte Carlo simulation of neutron coded imaging with an encoding aperture, for a Z-pinch source with a large, 5-mm-radius field of view, has been investigated, and the coded image obtained. A reconstruction method for the source image based on genetic algorithms (GA) has been established. A 'residual watermark,' which emerges unavoidably in the reconstructed image when peak normalization is employed in the GA fitness calculation (because it amplifies statistical fluctuations), has been discovered and studied. The residual watermark is primarily related to the shape and other parameters of the encoding-aperture cross section. The properties and underlying causes of the residual watermark were analyzed, and an identification of the equivalent radius of the aperture was provided. Using the equivalent radius, the reconstruction can be accomplished without knowing the point spread function (PSF) of the actual aperture; the result is close to that obtained using the PSF of the actual aperture.

  7. Construction of a 56 mm aperture high-field twin-aperture superconducting dipole model magnet

    SciTech Connect

    Ahlbaeck, J; Leroy, D.; Oberli, L.; Perini, D.; Salminen, J.; Savelainen, M.; Soini, J.; Spigo, G.

    1996-07-01

    A twin-aperture superconducting dipole model has been designed in collaboration with Finnish and Swedish scientific institutions within the framework of the LHC R&D program and has been built at CERN. Principal features of the magnet are a 56 mm aperture, separate stainless-steel collared coils, a yoke closed after assembly at room temperature, and longitudinal prestressing of the coil ends. This paper recalls the main dipole design characteristics and presents some details of its fabrication, including geometrical and mechanical measurements of the collared coil assembly.

  8. Optimization of synthetic aperture image quality

    NASA Astrophysics Data System (ADS)

    Moshavegh, Ramin; Jensen, Jonas; Villagomez-Hoyos, Carlos A.; Stuart, Matthias B.; Hemmsen, Martin Christian; Jensen, Jørgen Arendt

    2016-04-01

    Synthetic Aperture (SA) imaging produces high-quality images and velocity estimates of both slow and fast flow at high frame rates. However, grating lobe artifacts can appear both in transmission and reception, affecting the image quality and the frame rate. Optimization of the parameters affecting SA image quality is therefore of great importance, and this paper proposes an advanced procedure for optimizing the parameters essential for acquiring an optimal image quality while generating high-resolution SA images. The optimization is mainly performed on measures such as the F-number, the number of emissions and the aperture size, as these are considered the acquisition factors contributing most to the quality of high-resolution SA images. Image quality is quantified in terms of the full width at half maximum (FWHM) and the cystic resolution (CTR). The results of the study showed that SA imaging with only 32 emissions and a maximum sweep angle of 22 degrees yields a very good image quality compared with using 256 emissions and the full aperture size. The number of emissions and the maximum sweep angle can therefore be optimized to reach a reasonably good performance and to increase the frame rate by lowering the required number of emissions. All measurements are performed using the experimental SARUS scanner connected to a λ/2-pitch transducer. A wire phantom and a tissue-mimicking phantom containing anechoic cysts are scanned using the optimized parameters for the transducer. Measurements coincide with simulations.
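
FWHM, one of the quality metrics above, can be measured from a sampled point-spread-function profile as follows (a generic sketch, not the SARUS processing chain):

```python
import numpy as np

def fwhm(profile, dx=1.0):
    """Full width at half maximum of a single-lobed PSF profile, with linear
    interpolation of the half-maximum crossings on both flanks."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.flatnonzero(p >= half)
    l, r = above[0], above[-1]
    x_left = l - (p[l] - half) / (p[l] - p[l - 1]) if l > 0 else float(l)
    x_right = r + (p[r] - half) / (p[r] - p[r + 1]) if r < p.size - 1 else float(r)
    return (x_right - x_left) * dx

# Sanity check: a Gaussian profile has FWHM = 2 sqrt(2 ln 2) sigma ~ 2.355 sigma.
x = np.arange(-5.0, 5.0, 0.01)
psf = np.exp(-x**2 / 2.0)          # sigma = 1
width = fwhm(psf, dx=0.01)
```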

  9. The Configurable Aperture Space Telescope (CAST)

    NASA Astrophysics Data System (ADS)

    Ennico, Kimberly; Bendek, Eduardo A.; Lynch, Dana H.; Vassigh, Kenny K.; Young, Zion

    2016-07-01

    The Configurable Aperture Space Telescope, CAST, is a concept that provides access to a UV/visible-infrared sub-arcsecond imaging platform from space, something that will be in high demand after the retirement of the astronomy workhorse, the 2.4-meter-diameter Hubble Space Telescope. CAST allows building large-aperture telescopes from small, compatible and low-cost segments mounted on autonomous cube-sized satellites. The concept merges existing technology (segmented telescope architecture) with emerging technology (smartly interconnected modular spacecraft, active optics, deployable structures). Because it requires identical mirror segments, CAST's optical design is a spherical primary and secondary mirror telescope with modular multi-mirror correctors placed at the system focal plane. The design enables wide fields of view, up to as much as three degrees, while maintaining aperture growth and image performance requirements. We present a point design for the CAST concept based on a 0.6-meter-diameter (3 x 3 segments) primary growing to a 2.6-meter-diameter (13 x 13 segments) primary, with fixed radii of curvature Rp = 13,000 mm and Rs = 8,750 mm, at f/22.4 and f/5.6, respectively. Its diffraction-limited design uses a two-arcminute field-of-view corrector with a 7.4 arcsec/mm plate scale, and can support a range of plate scales as fine as 0.01 arcsec/mm. Our paper summarizes CAST, presents a strawman optical design and requirements for the underlying modular spacecraft, highlights design flexibilities, and illustrates applications enabled by this new method of building space observatories.

  10. Digital exploitation of synthetic aperture radar

    NASA Technical Reports Server (NTRS)

    Wagner, H. L.; Shuchman, R. A.

    1977-01-01

    A digital processing and analysis scheme for use with digitized synthetic aperture radar data was developed. Using data from a four channel system, the imagery is preprocessed using specially designed software and then analyzed using preexisting facilities originally intended for use with MSS type data. Geometric and radiometric correction may be performed if desired, as well as classification analysis, Fast Fourier transform, filtering and level slice and display functions. The system provides low cost output in real time, permitting interactive imagery analysis. System information flow diagrams as well as sample output products are shown.

  11. Lossless compression of synthetic aperture radar images

    SciTech Connect

    Ives, R.W.; Magotra, N.; Mandyam, G.D.

    1996-02-01

    Synthetic Aperture Radar (SAR) has been proven an effective sensor in a wide variety of applications. Many of these uses require transmission and/or processing of the image data in a lossless manner. With the current state of SAR technology, the amount of data contained in a single image may be massive, whether the application requires the entire complex image or magnitude data only. In either case, some type of compression may be required to losslessly transmit this data in a given bandwidth or store it in a reasonable volume. This paper provides the results of applying several lossless compression schemes to SAR imagery.
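
A minimal example of the predictive style of lossless coding often applied to imagery: predict each sample from its neighbor, keep the residuals, and entropy-code them. The data and scheme are illustrative only, not one of the paper's tested schemes.

```python
import numpy as np

def delta_encode(x):
    # Predict each sample by its predecessor and keep the residuals,
    # which cluster near zero for spatially correlated imagery.
    x = np.asarray(x, dtype=np.int64)
    return np.concatenate(([x[0]], np.diff(x)))

def delta_decode(residuals):
    return np.cumsum(residuals)

def entropy_bits(symbols):
    # Zeroth-order entropy: a lower bound on the average code length an
    # entropy coder (Huffman, arithmetic) could achieve per symbol.
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

row = np.array([100, 102, 103, 103, 105, 104, 104, 106])  # toy SAR magnitudes
residuals = delta_encode(row)
```

The decoder reconstructs the row exactly, which is the defining property of a lossless scheme, while the residual stream has lower entropy than the raw samples and so compresses better.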

  12. Cancellation of singularities for synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Caday, Peter

    2015-01-01

    In a basic model for synthetic aperture radar (SAR) imaging, one wishes to recover a function or distribution f from line integrals over circles whose centers lie on a given curve γ. In this paper, we consider the problem of recovering the singularities (wavefront set) of f given its SAR data, and specifically whether it is possible to choose a singular f whose singularities are hidden from γ, meaning that its SAR data is smooth. We show that f's singularities can be hidden to leading order if a certain discrete reflection map is the identity, and give examples where this is the case. Finally, numerical experiments illustrate the hiding of singularities.

  13. Synthetic aperture radar autofocus via semidefinite relaxation.

    PubMed

    Liu, Kuang-Hung; Wiesel, Ami; Munson, David C

    2013-06-01

    The autofocus problem in synthetic aperture radar imaging amounts to estimating unknown phase errors caused by unknown platform or target motion. At the heart of three state-of-the-art autofocus algorithms, namely, phase gradient autofocus, multichannel autofocus (MCA), and Fourier-domain multichannel autofocus (FMCA), is the solution of a constant modulus quadratic program (CMQP). Currently, these algorithms solve a CMQP by using an eigenvalue relaxation approach. We propose an alternative relaxation approach based on semidefinite programming, which has recently attracted considerable attention in other signal processing problems. Experimental results show that our proposed methods provide promising performance improvements for MCA and FMCA through an increase in computational complexity.
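
The eigenvalue relaxation that the semidefinite approach is compared against can be sketched as follows. M below is a random Hermitian positive-semidefinite stand-in for the data-derived matrix; the SDR itself lifts X = xx^H to a PSD matrix with unit diagonal and needs an SDP solver, so only the eigenvalue baseline and its bound are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# CMQP from autofocus: maximize x^H M x subject to |x_i| = 1, where x holds
# the unknown phase corrections.
n = 6
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M = A @ A.conj().T                     # Hermitian PSD stand-in

# Eigenvalue relaxation: replace the n per-entry modulus constraints by the
# single constraint ||x||^2 = n; the optimum is then sqrt(n) times the
# leading eigenvector of M.
w, V = np.linalg.eigh(M)
x_relaxed = np.sqrt(n) * V[:, -1]

# Project back onto the feasible set by keeping only the phases.
x_feasible = np.exp(1j * np.angle(x_relaxed))

obj = float(np.real(x_feasible.conj() @ M @ x_feasible))
bound = float(n * w[-1])               # relaxation's bound on the CMQP optimum
```

Any feasible x satisfies x^H M x <= n * lambda_max, so the projected solution's objective can be checked against the relaxation bound.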

  14. Aperture Effects and Mismatch Oscillations in an Intense Electron Beam

    SciTech Connect

    Harris, J R; O'Shea, P G

    2008-05-12

    When an electron beam is apertured, the transmitted beam current is the product of the incident beam current density and the aperture area. Space charge forces generally cause an increase in incident beam current to result in an increase in incident beam spot size. Under certain circumstances, the spot size will increase faster than the current, resulting in a decrease in current extracted from the aperture. When using a gridded electron gun, this can give rise to negative transconductance. In this paper, we explore this effect in the case of an intense beam propagating in a uniform focusing channel. We show that proper placement of the aperture can decouple the current extracted from the aperture from fluctuations in the source current, and that apertures can serve to alter longitudinal space charge wave propagation by changing the relative contribution of velocity and current modulation present in the beam.

  15. Multi-mission, autonomous, synthetic aperture radar

    NASA Astrophysics Data System (ADS)

    Walls, Thomas J.; Wilson, Michael L.; Madsen, David; Jensen, Mark; Sullivan, Stephanie; Addario, Michael; Hally, Iain

    2014-05-01

    Unmanned aerial systems (UASs) have become a critical asset in current battlespaces and continue to play an increasing role for intelligence, surveillance and reconnaissance (ISR) missions. With the development of medium-to-low altitude, rapidly deployable aircraft platforms, the ISR community has seen an increasing push to develop ISR sensors and systems with real-time mission support capabilities. This paper describes recent flight demonstrations and test results of the RASAR (Real-time, Autonomous, Synthetic Aperture Radar) sensor system. RASAR is a modular, multi-band (L and X) synthetic aperture radar (SAR) imaging sensor designed for self-contained, autonomous, real-time operation with mission flexibility to support a wide range of ISR needs within the size, weight and power constraints of Group III UASs. The sensor command and control and real-time image formation processing are designed to allow integration of RASAR into a larger, multi-intelligence system of systems. The multi-intelligence architecture and a demonstration of real-time autonomous cross-cueing of a separate optical sensor will be presented.

  16. KAOS: kilo-aperture optical spectrograph

    NASA Astrophysics Data System (ADS)

    Barden, Samuel C.; Dey, Arjun; Boyle, Brian; Glazebrook, Karl

    2004-09-01

    A design is described for a potential new facility capable of taking detailed spectroscopy of millions of objects in the Universe to explore the complexity of the Universe and to answer fundamental questions relating to the equation of state of dark energy and to how the Milky Way galaxy formed. The specific design described is envisioned for implementation on the Gemini 8-meter telescopes. It utilizes a 1.5° field of view and samples that field with up to ~5000 apertures. This Kilo-Aperture Optical Spectrograph (KAOS) is mounted at prime focus with a 4-element corrector, atmospheric dispersion compensator (ADC), and an Echidna-style fiber optic positioner. The ADC doubles as a wobble plate, allowing fast guiding that cancels out the wind buffeting of the telescope. The fibers, which can be reconfigured in less than 10 minutes, feed an array of 12 spectrographs located in the pier of the telescope. The spectrographs are capable of providing spectral resolving powers from a few thousand up to about 40,000.

  17. The SKA New Instrumentation: Aperture Arrays

    NASA Astrophysics Data System (ADS)

    van Ardenne, A.; Faulkner, A. J.; de Vaate, J. G. bij

    The radio frequency window of the Square Kilometre Array is planned to cover the wavelength regime from cm up to a few meters. For this range to be optimally covered, different antenna concepts are considered, enabling many science cases. At the lowest frequency range, up to a few GHz, it is expected that multi-beam techniques will be used, increasing the effective field of view to a level that allows very efficient, detailed and sensitive exploration of the complete sky. Although sparse narrow-band phased arrays are as old as radio astronomy, multi-octave sparse and dense arrays are now being considered for the SKA, requiring new low-noise design, signal processing and calibration techniques. These new array techniques have already been successfully introduced as phased array feeds, upgrading existing reflecting telescopes and equipping new telescopes to enhance the aperture efficiency as well as greatly increase their field of view (van Ardenne et al., Proc. IEEE 97(8), 2009) [1]. Aperture arrays use phased arrays without any additional reflectors; the phased-array elements are small enough to see most of the sky, intrinsically offering a large field of view.

  18. Sparse aperture mask wavefront sensor testbed results

    NASA Astrophysics Data System (ADS)

    Subedi, Hari; Zimmerman, Neil T.; Kasdin, N. Jeremy; Riggs, A. J. E.

    2016-07-01

    Coronagraphic exoplanet detection at very high contrast requires the estimation and control of low-order wavefront aberrations. At the Princeton High Contrast Imaging Lab (PHCIL), we are working on a new technique that integrates a sparse-aperture mask (SAM) with a shaped pupil coronagraph (SPC) to make precise estimates of these low-order aberrations. We collect the starlight rejected from the coronagraphic image plane and interfere it using the SAM at the relay pupil to estimate the low-order aberrations. In our previous work we numerically demonstrated the efficacy of the technique, and proposed a method to sense and control these differential aberrations in broadband light. We also presented early testbed results in which the SAM was used to sense pointing errors. In this paper, we briefly overview the SAM wavefront sensor technique, explain the design of the completed testbed, and report experimental estimation results for the dominant low-order aberrations such as tip/tilt, astigmatism and focus.

  19. Optical nanolithography with λ/15 resolution using bowtie aperture array

    NASA Astrophysics Data System (ADS)

    Wen, Xiaolei; Traverso, Luis M.; Srisungsitthisunti, Pornsak; Xu, Xianfan; Moon, Euclid E.

    2014-10-01

    We report optical parallel nanolithography using bowtie apertures with the help of the interferometric-spatial-phase-imaging (ISPI) technique. The ISPI system can detect and control the distance between the bowtie aperture and the photoresist with sub-nanometer resolution, overcoming the difficulties caused by the light divergence of bowtie apertures. Parallel nanolithography with a feature size of 22 ± 5 nm is achieved. This technique combines high resolution, parallel throughput, and low cost, which is promising for practical applications.

  20. Detection of and compensation for blocked elements using large coherent apertures: ex vivo studies

    NASA Astrophysics Data System (ADS)

    Jakovljevic, Marko; Bottenus, Nick; Kuo, Lily; Kumar, Shalki; Dahl, Jeremy; Trahey, Gregg

    2016-04-01

    When imaging with ultrasound through the chest wall, it is not uncommon for parts of the array to get blocked by ribs, which can limit the acoustic window and significantly impede visualization of the structures of interest. With the development of large-aperture, high-element-count, 2-D arrays and their potential use in transthoracic imaging, detecting and compensating for the blocked elements is becoming increasingly important. We synthesized large coherent 2-D apertures and used them to image a point target through excised samples of canine chest wall. Blocked elements are detected based on the low amplitude of their signals. As a part of compensation, blocked elements are turned off on transmit (Tx) and receive (Rx), and point-target images are created using: coherent summation of the remaining channels, compounding of intercostal apertures, and adaptive weighting of the available Tx/Rx channel pairs to recover the desired k-space response. The adaptive compensation method also includes a phase aberration correction to ensure that the non-blocked Tx/Rx channel pairs are summed coherently. To evaluate the methods, we compare the point spread functions (PSFs) and near-field clutter levels for the transcostal and control acquisitions. Specifically, applying k-space compensation to the sparse aperture data created from the control acquisition reduces sidelobes from -6.6 dB to -12 dB. When applied to the transcostal data in combination with phase-aberration correction, the same method reduces sidelobes by only 3 dB, likely due to significant tissue-induced acoustic noise. For the transcostal acquisition, turning off blocked elements and applying uniform weighting results in a maximum clutter reduction of 5 dB on average, while the PSF stays intact. Compounding reduces clutter by about 3 dB, while the k-space compensation increases clutter magnitude to the non-compensated levels.
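
Detection of blocked elements by their low signal amplitude can be sketched as follows (hypothetical amplitudes and threshold; the study's detection and weighting steps are more involved):

```python
import numpy as np

# Hypothetical per-element mean echo amplitudes for a 64-element aperture;
# elements shadowed by a rib return far less energy than the rest.
rng = np.random.default_rng(5)
amplitudes = rng.uniform(0.8, 1.2, 64)
amplitudes[20:28] = 0.05                  # eight elements blocked by a rib

# Flag elements whose amplitude falls well below the array median, then
# turn them off (zero apodization) on both transmit and receive.
active = amplitudes > 0.5 * np.median(amplitudes)
apodization = active.astype(float)        # uniform weighting of the remainder
```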

  1. Experiment in Onboard Synthetic Aperture Radar Data Processing

    NASA Technical Reports Server (NTRS)

    Holland, Matthew

    2011-01-01

    Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and require too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time, and writes out processed data in a form that is useful for analysis of the radar observations.
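
One common RHBS ingredient is majority voting over triplicated state, sketched below as an illustrative pattern (not the actual Tilera implementation described above):

```python
# Keep three redundant copies of critical state and vote on every read:
# a single upset copy is outvoted, and a "scrub" rewrites all copies.
def tmr_read(copies):
    a, b, c = copies
    return a if a == b or a == c else b   # any two agreeing copies win

state = [42, 42, 42]      # triplicated value
state[1] ^= 0x10          # simulate an SEU flipping a bit in one copy
value = tmr_read(state)
state = [value] * 3       # scrub: restore full redundancy
```

The same vote-and-scrub pattern can be applied per memory page or per checkpointed process image, trading memory for resilience without hardware EDAC.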

  2. Synthetic aperture radar signal processing on the MPP

    NASA Technical Reports Server (NTRS)

    Ramapriyan, H. K.; Seiler, E. J.

    1987-01-01

    Satellite-borne Synthetic Aperture Radars (SAR) sense areas of several thousand square kilometers in seconds and transmit phase-history signal data at several tens of megabits per second. The Shuttle Imaging Radar-B (SIR-B) had a variable swath of 20 to 50 km and acquired data over 100 km along track in about 13 seconds. Even with the simplification afforded by separability of the reference function, the processing still requires considerable resources: high-speed I/O, large memory and fast computation. Processing systems with regular hardware take hours to process one Seasat image and about one hour for a SIR-B image. Bringing this processing time closer to acquisition times requires an end-to-end system solution. For the purpose of demonstration, software was implemented on the present Massively Parallel Processor (MPP) configuration for processing Seasat and SIR-B data. The software takes advantage of the high processing speed offered by the MPP, the large Staging Buffer, and the high-speed I/O between the MPP array unit and the Staging Buffer. It was found that with unoptimized Parallel Pascal code, the processing time on the MPP for a 4096 x 4096-sample subset of signal data ranges between 18 and 30.2 seconds, depending on options.

  3. Synthetic aperture radar signal processing on the MPP

    NASA Astrophysics Data System (ADS)

    Ramapriyan, H. K.; Seiler, E. J.

    1987-07-01

    Satellite-borne Synthetic Aperture Radars (SAR) sense areas of several thousand square kilometers in seconds and transmit phase-history signal data at several tens of megabits per second. The Shuttle Imaging Radar-B (SIR-B) had a variable swath of 20 to 50 km and acquired data over 100 km along track in about 13 seconds. Even with the simplification afforded by separability of the reference function, the processing still requires considerable resources: high-speed I/O, large memory and fast computation. Processing systems with regular hardware take hours to process one Seasat image and about one hour for a SIR-B image. Bringing this processing time closer to acquisition times requires an end-to-end system solution. For the purpose of demonstration, software was implemented on the present Massively Parallel Processor (MPP) configuration for processing Seasat and SIR-B data. The software takes advantage of the high processing speed offered by the MPP, the large Staging Buffer, and the high-speed I/O between the MPP array unit and the Staging Buffer. It was found that with unoptimized Parallel Pascal code, the processing time on the MPP for a 4096 x 4096-sample subset of signal data ranges between 18 and 30.2 seconds, depending on options.

  4. Measuring spatial coherence by using a mask with multiple apertures

    NASA Astrophysics Data System (ADS)

    Mejía, Yobani; González, Aura Inés

    2007-05-01

    A simple method to measure the complex degree of spatial coherence of a partially coherent quasi-monochromatic light field is presented. The Fourier spectrum of the far-field interferogram generated by a mask with multiple apertures (small circular holes) is analyzed in terms of classes of aperture pairs. A class of aperture pairs is defined as the set of aperture pairs with the same separation vector. The height of the peaks in the magnitude spectrum determines the modulus of the complex degree of spatial coherence and the corresponding value in the phase spectrum determines the phase of the complex degree of spatial coherence. The method is illustrated with experimental results.
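
The peak-reading step can be sketched for a single class of aperture pairs (one pinhole pair, equal intensities, noise-free fringes; the experimental method handles many classes in one interferogram):

```python
import numpy as np

# Ground-truth complex degree of coherence for one class of aperture pairs.
mu = 0.6 * np.exp(1j * 0.8)

# Far-field fringes from one pinhole pair under equal-intensity illumination:
# I(x) = 2 I0 [1 + |mu| cos(2 pi f0 x + arg mu)], with fringe frequency f0
# set by the pair's separation vector.
N = 1024
x = np.arange(N)
f0 = 50 / N
I = 2.0 * (1.0 + np.abs(mu) * np.cos(2 * np.pi * f0 * x + np.angle(mu)))

# The spectral peak at f0, normalized by half the DC peak, recovers the
# complex degree of spatial coherence: magnitude from the peak height,
# phase from the peak phase.
S = np.fft.fft(I) / N
mu_est = 2 * S[50] / S[0].real
```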

  5. Functionalized apertures for the detection of chemical and biological materials

    DOEpatents

    Letant, Sonia E.; van Buuren, Anthony W.; Terminello, Louis J.; Thelen, Michael P.; Hope-Weeks, Louisa J.; Hart, Bradley R.

    2010-12-14

    Disclosed are nanometer to micron scale functionalized apertures constructed on a substrate made of glass, carbon, semiconductors or polymeric materials that allow for the real time detection of biological materials or chemical moieties. Many apertures can exist on one substrate allowing for the simultaneous detection of numerous chemical and biological molecules. One embodiment features a macrocyclic ring attached to cross-linkers, wherein the macrocyclic ring has a biological or chemical probe extending through the aperture. Another embodiment achieves functionalization by attaching chemical or biological anchors directly to the walls of the apertures via cross-linkers.

  6. Multiple aperture window and seeker concepts for endo KEW applications

    SciTech Connect

    Shui, V.H.; Reeves, B.L.; Thyson, N.A.; Mueffelmann, W.H.; Werner, J.S.; Jones, G. (Loral Infrared and Imaging Systems, Lexington, MA; U.S. Army Strategic Defense Command, Huntsville, AL)

    1992-05-01

    Hypersonic interceptors performing endoatmospheric hit-to-kill missions require very high seeker angle measurement accuracies in very severe aero-thermal environments. Wall jet window/aperture cooling usually leads to significant aero-optic degradation in seeker and hence interceptor performance. This paper describes window/aperture concepts that have the potential of eliminating or significantly reducing the need for coolant injection, together with a multiple aperture sensor concept that can provide a high angle measurement accuracy and a large field of regard, with a small aperture size. 15 refs.

  7. Edge equilibrium code for tokamaks

    SciTech Connect

    Li, Xujing; Drozdov, Vladimir V.

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  8. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage that the speech signal was corrupted by noise, cross-talk and distortion. Long-haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. Digital transmission, on the other hand, is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely on the basis of a binary decision. Hence the end-to-end performance of the digital link becomes essentially independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service-provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. This term is more generic in the sense that the
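
A classic example of the waveform-coding family is logarithmic companding, sketched below in the style of G.711 mu-law with an invented test signal (an illustration of the general technique, not a method from this text):

```python
import numpy as np

def mu_law_encode(x, mu=255.0):
    # Logarithmic companding: small amplitudes, where speech spends most of
    # its time, get finer quantization steps than large ones.
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_decode(y, mu=255.0):
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

t = np.linspace(0.0, 1.0, 8000, endpoint=False)
speech = 0.5 * np.sin(2 * np.pi * 200 * t)   # stand-in for a speech waveform

# Quantize to 8 bits in the companded domain, then expand back at the far end.
y = mu_law_encode(speech)
codes = np.round((y + 1.0) / 2.0 * 255.0)    # the transmitted 8-bit codes
restored = mu_law_decode(codes / 255.0 * 2.0 - 1.0)
```

With 8 bits per sample the companded quantizer keeps the reconstruction error small across the signal's dynamic range, which is why companded PCM became the baseline for digital telephony.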

  9. Target discrimination in synthetic aperture radar using artificial neural networks.

    PubMed

    Principe, J C; Kim, M; Fisher, M

    1998-01-01

    This paper addresses target discrimination in synthetic aperture radar (SAR) imagery using linear and nonlinear adaptive networks. Neural networks are extensively used for pattern classification, but here the goal is discrimination. We show that the two applications require different cost functions. We start by analyzing, from a pattern recognition perspective, the two-parameter constant false alarm rate (CFAR) detector, which is widely utilized as a target detector in SAR. Then we generalize its principle to construct the quadratic gamma discriminator (QGD), a nonparametrically trained classifier based on local image intensity. The linear processing element of the QGD is further extended with nonlinearities, yielding a multilayer perceptron (MLP) which we call the NL-QGD (nonlinear QGD). MLPs are normally trained based on the L(2) norm. We experimentally show that the L(2) norm is not recommended for training MLPs to discriminate targets in SAR. Inspired by the Neyman-Pearson criterion, we create a cost function based on a mixed norm to weight the false alarms and the missed detections differently. Mixed norms can easily be incorporated into the backpropagation algorithm, and lead to better performance. Several other norms (L(8), cross-entropy) were applied to train the NL-QGD and all outperformed the L(2) norm when validated by receiver operating characteristic (ROC) curves. The data sets are constructed from TABILS 24 ISAR targets embedded in 7 km(2) of SAR imagery (MIT/LL mission 90).
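
    The mixed-norm idea described in the abstract, penalizing missed detections and false alarms with different norms, can be sketched as follows. This is an illustrative reconstruction in the spirit of the Neyman-Pearson weighting, not the authors' code; the function name, the exponents, and the label convention are assumptions.

```python
import numpy as np

def mixed_norm_loss(y_pred, y_true, p_miss=8, p_fa=2):
    """Asymmetric mixed-norm cost: errors on target samples (potential
    missed detections) are raised to a higher power than errors on
    clutter samples (potential false alarms), so the two error types
    are weighted differently. y_true is 1 for targets, 0 for clutter;
    y_pred lies in [0, 1]."""
    err = np.abs(y_pred - y_true)
    miss_term = np.sum(err[y_true == 1] ** p_miss)  # missed detections
    fa_term = np.sum(err[y_true == 0] ** p_fa)      # false alarms
    return miss_term + fa_term

y_true = np.array([1, 1, 0, 0])
y_pred = np.array([0.9, 0.4, 0.1, 0.6])
loss = mixed_norm_loss(y_pred, y_true)
```

    Because every term is a power of a differentiable error, the gradient is straightforward, which is why such mixed norms drop easily into backpropagation.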

  10. Improved terahertz imaging with a sparse synthetic aperture array

    NASA Astrophysics Data System (ADS)

    Zhang, Zhuopeng; Buma, Takashi

    2010-02-01

    Sparse arrays are highly attractive for implementing two-dimensional arrays, but come at the cost of degraded image quality. We demonstrate significantly improved performance by exploiting the coherent ultrawideband nature of single-cycle THz pulses. We apply two weighting factors to each time-delayed signal before the final summation that forms the reconstructed image. The first factor employs cross-correlation analysis to measure the degree of walk-off between time-delayed signals of neighboring elements. The second factor measures the spatial coherence of the time-delayed signals. Synthetic aperture imaging experiments are performed with a THz time-domain system employing a mechanically scanned single transceiver element. Cross-sectional imaging of wire targets is performed with a one-dimensional sparse array with an inter-element spacing of 1.36 mm (over four λ at 1 THz). The proposed image reconstruction technique improves image contrast by 15 dB, which is impressive considering the relatively few elements in the array. En-face imaging of a razor blade is also demonstrated with a 56 x 56 element two-dimensional array, showing reduced image artifacts with adaptive reconstruction. These encouraging results suggest that the proposed image reconstruction technique can be highly beneficial to the development of large-area two-dimensional THz arrays.
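
    The spatial-coherence weighting applied to the time-delayed signals before summation can be illustrated with the standard coherence factor used in array imaging. This is a generic sketch, not the authors' exact weights: aligned signals pass with full amplitude, while incoherent signals (the source of sparse-array artifacts) are suppressed.

```python
import numpy as np

def coherence_weighted_das(delayed, eps=1e-12):
    """Delay-and-sum with a coherence-factor weight. `delayed` is an
    (elements, samples) array of already time-aligned signals for one
    image pixel; weight = |coherent sum|^2 / (N * incoherent sum)."""
    n = delayed.shape[0]
    coherent = delayed.sum(axis=0)                       # coherent sum
    incoherent = (np.abs(delayed) ** 2).sum(axis=0)      # incoherent sum
    cf = np.abs(coherent) ** 2 / (n * incoherent + eps)  # in [0, 1]
    return cf * coherent

# Perfectly aligned signals keep their full summed amplitude (cf ~ 1)...
aligned = np.tile(np.sin(np.linspace(0, 6.28, 64)), (8, 1))
out_aligned = coherence_weighted_das(aligned)
# ...while mutually incoherent signals are strongly attenuated.
rng = np.random.default_rng(0)
noise = rng.standard_normal((8, 64))
out_noise = coherence_weighted_das(noise)
```
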

  11. Finite-aperture tapered unstable resonator lasers

    NASA Astrophysics Data System (ADS)

    Bedford, Robert George

    The development of high-power, high-brightness semiconductor lasers is important for applications such as efficient pumping of fiber amplifiers and free-space communication. The ability to couple directly into the core of a single-mode fiber can vastly increase the absorption of pump light. Further, the high mode selectivity provided by unstable resonators accommodates single-mode operation to many times the threshold current level. The objective of this dissertation is to investigate a more efficient semiconductor-based unstable resonator design. The tapered unstable resonator laser consists of a single-mode ridge coupled to a tapered gain region. The ridge, aided by spoiling grooves, provides essential preparation of the fundamental mode, while the taper provides significant amplification and a large output mode. It is shown that a laterally finite taper-side mirror (making the laser a "finite-aperture tapered unstable resonator laser") serves to significantly improve differential quantum efficiency. This opens the possibility of higher optical powers while still maintaining single-mode operation. Additionally, the advent of a detuned second-order grating allows for a low-divergence, quasi-circular output beam emitted from the semiconductor surface, easing packaging tolerances and making two-dimensional integrated arrays possible. In this dissertation, theory, design, fabrication, and characterization are presented. Material theory is introduced, reviewing gain, carrier, and temperature effects on field propagation. Coupled-mode and coupled-wave theory are reviewed to allow simulation of the passive grating. A numerical model is used to investigate laser design and optimization, and the effects of finite apertures are explored. A microfabrication method is introduced to create the FATURL in InAlGaAs/InGaAsP/InP material emitting at about 1410 nm. Fabrication consists of photolithography, electron-beam lithography, wet and dry etching processes, metal and

  12. Coded source neutron imaging at the PULSTAR reactor

    SciTech Connect

    Xiao, Ziyu; Mishra, Kaushal; Hawari, Ayman; Bingham, Philip R; Bilheux, Hassina Z; Tobin Jr, Kenneth William

    2011-01-01

    A neutron imaging facility is located on beam tube No. 5 of the 1-MW PULSTAR reactor at North Carolina State University. An investigation of high-resolution imaging using the coded source imaging technique has been initiated at the facility. Coded imaging uses a mosaic of pinholes to encode an aperture, thus generating an encoded image of the object at the detector. To reconstruct the image data received by the detector, the corresponding decoding patterns are used. The optimized design of the coded mask is critical for the performance of this technique and will depend on the characteristics of the imaging beam. In this work, a 34 x 38 uniformly redundant array (URA) coded aperture system is studied for application at the PULSTAR reactor neutron imaging facility. The URA pattern was fabricated on a 500 μm gadolinium sheet. Simulations and experiments with a pinhole object have been conducted using the Gd URA and the optimized beam line.
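
    Coded-aperture encoding and correlation decoding can be demonstrated in one dimension with a quadratic-residue array, a close 1-D relative of the URA described above (the 34 x 38 mask itself is two-dimensional and uses a different construction). For a prime p = 3 (mod 4), correlating the detector image with the balanced decoding array G = 2A - 1 recovers the scene exactly. A sketch for illustration only:

```python
import numpy as np

def qr_mask(p):
    """1-D coded aperture from quadratic residues mod a prime p = 3 (mod 4):
    open (1) at position 0 and at every quadratic residue, closed (0) elsewhere."""
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1] + [1 if i in residues else 0 for i in range(1, p)])

def encode(obj, mask):
    """Detector image: circular convolution of the scene with the mask."""
    return np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(mask)))

def decode(img, mask):
    """Circular correlation with the decoding array G = 2A - 1; for this
    mask family the periodic cross-correlation of A and G is a delta."""
    g = 2 * mask - 1
    return np.real(np.fft.ifft(np.fft.fft(img) * np.conj(np.fft.fft(g))))

p = 11
mask = qr_mask(p)
scene = np.zeros(p)
scene[4] = 1.0                         # a single point source
recon = decode(encode(scene, mask), mask)
```

    The point source is recovered at its original position with amplitude (p+1)/2 and exactly zero sidelobes, which is why URA-type masks are preferred over random pinhole mosaics.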

  13. Shutter/aperture settings for aerial photography

    NASA Technical Reports Server (NTRS)

    Lockwood, H. E.; Perry, L.

    1976-01-01

    Determination of aerial camera shutter and aperture settings to produce consistently high-quality aerial photographs is a task complicated by numerous variables. Presented in this article are brief discussions of each variable and specific data which may be used for the systematic control of each. The variables discussed include sunlight, aircraft altitude, subject and season, film speed, and optical system. Data which may be used as a base reference are included, and encompass two sets of sensitometric specifications for two film-chemistry processes along with camera-aircraft parameters, which have been established and used to produce good exposures. Information contained here may be used to design and implement an exposure-determination system for aerial photography.
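
    The kind of systematic exposure determination the article describes can be reduced to the standard photographic exposure equation N^2 / t = 2^EV100 * (ISO / 100). A minimal sketch; the EV value for bright sun is a conventional assumption, not a figure from the article:

```python
import math

def shutter_time(ev100, iso, f_number):
    """Solve the exposure equation N^2 / t = 2^EV100 * (ISO / 100)
    for the shutter time t, given the aperture (f-number)."""
    ev = ev100 + math.log2(iso / 100)
    return f_number ** 2 / 2 ** ev

# Bright sunlight is conventionally about EV100 = 15; at ISO 100 and
# f/16 this reproduces the "sunny 16" rule (shutter near 1/ISO seconds).
t = shutter_time(15, 100, 16)   # 1/128 s, close to the nominal 1/125 s
```
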

  14. Very high numerical aperture light transmitting device

    DOEpatents

    Allison, Stephen W.; Boatner, Lynn A.; Sales, Brian C.

    1998-01-01

    A new light-transmitting device using a SCIN glass core and a novel calcium sodium cladding has been developed. The very high index of refraction, radiation hardness, similar solubility for rare earths, and similar melt and viscosity characteristics of the core and cladding materials make them attractive for several applications such as high-numerical-aperture optical fibers and specialty lenses. Optical fibers up to 60 m in length have been drawn, and several simple lenses have been designed, ground, and polished. Preliminary results on the ability to directly cast optical components of lead-indium phosphate glass are discussed, as well as the suitability of these glasses as a host medium for rare-earth-ion lasers and amplifiers.

  15. High numerical aperture multilayer Laue lenses

    SciTech Connect

    Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; Krzywinski, Jacek; Meents, Alke; Pennicard, David; Graafsma, Heinz; Barty, Anton; Bean, Richard J.; Barthelmess, Miriam; Oberthuer, Dominik; Yefanov, Oleksandr; Aquila, Andrew; Chapman, Henry N.; Bajt, Saša

    2015-06-01

    The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.

  16. Analysis of synthetic aperture radar imagery

    NASA Technical Reports Server (NTRS)

    Blanchard, B. J.

    1977-01-01

    Some problems faced in applications of radar measurements in hydrology are: (1) adequate calibration of the radar systems and direct digital data will be required in order that repeatable data can be acquired for hydrologic applications; (2) quantitative hydrologic research on a large scale will be prohibitive with aircraft mounted synthetic aperture radar systems due to the system geometry; (3) spacecraft platforms appear to be the best platforms for radar systems when conducting research over watersheds larger than a few square kilometers; (4) experimental radar systems should be designed to avoid use of radomes; and (5) cross polarized X and L band data seem to discriminate between good and poor hydrologic cover better than like polarized data.

  17. Aperture-synthesis interferometry at optical wavelengths

    NASA Technical Reports Server (NTRS)

    Burke, Bernard F.

    1987-01-01

    The prospects for applying aperture-synthesis interferometry to the optical domain are reviewed. The radio examples such as the VLA provide a model, since the concepts are equally valid for radio and optical wavelengths. If scientific problems at the milliarc-second resolution level (or better) are to be addressed, a space-based optical array seems to be the only practical alternative, for the same reasons that dictated array development at radio wavelengths. One concept is examined, and speculations are offered concerning the prospects for developing real systems. Phase-coherence is strongly desired for a practical array, although self-calibration and phase-closure techniques allow one to relax the restriction on absolute phase stability. The design of an array must be guided by the scientific problems to be addressed.

  18. Automated change detection for synthetic aperture sonar

    NASA Astrophysics Data System (ADS)

    G-Michael, Tesfaye; Marchand, Bradley; Tucker, J. D.; Sternlicht, Daniel D.; Marston, Timothy M.; Azimi-Sadjadi, Mahmood R.

    2014-05-01

    In this paper, an automated change detection technique is presented that compares new and historical seafloor images created with sidescan synthetic aperture sonar (SAS) for changes occurring over time. The method consists of a four-stage process: a coarse navigational alignment; fine-scale co-registration using the scale-invariant feature transform (SIFT) algorithm to match features between overlapping images; sub-pixel co-registration to improve phase coherence; and finally, change detection utilizing canonical correlation analysis (CCA). The method was tested using data collected with a high-frequency SAS in a sandy shallow-water environment. By using precise co-registration tools and change detection algorithms, it is shown that the coherent nature of the SAS data can be exploited and utilized in this environment over time scales ranging from hours through several days.
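
    The final CCA stage can be sketched as follows: treat corresponding co-registered patches as two feature matrices and compute their canonical correlations; patches whose correlations drop are flagged as change. This is a generic CCA implementation, not the authors' processing chain, and the feature construction is an assumption.

```python
import numpy as np

def canonical_correlations(X, Y, eps=1e-9):
    """Canonical correlations between two co-registered patch feature
    matrices (rows = pixels, columns = features). Values near 1 mean
    the patches carry the same structure; low values suggest change."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten both sides; the singular values of the whitened
    # cross-covariance are the canonical correlations.
    A = np.linalg.cholesky(Cxx)
    B = np.linalg.cholesky(Cyy)
    M = np.linalg.solve(A, Cxy)       # apply Cxx^{-1/2} on the left
    M = np.linalg.solve(B, M.T).T     # apply Cyy^{-1/2} on the right
    return np.linalg.svd(M, compute_uv=False)

rng = np.random.default_rng(0)
old = rng.standard_normal((2000, 2))
same = old @ np.array([[2.0, 0.3], [0.1, 1.5]])  # same scene, new gain/mixing
changed = rng.standard_normal((2000, 2))         # unrelated scene
s_unchanged = canonical_correlations(old, same)
s_changed = canonical_correlations(old, changed)
```

    A simple change score is 1 minus the smallest canonical correlation, thresholded per patch.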

  19. Common aperture multispectral sensor flight test program

    SciTech Connect

    Bird, R.S.; Kaufman, C.S.

    1996-11-01

    This paper will provide an overview of the Common Aperture Multispectral Sensor (CAMS) Hardware Demonstrator. CAMS is a line-scanning sensor that simultaneously collects digital imagery over the far-IR (8 to 12 {mu}m) and visible (0.55 to 1.1 {mu}m) spectral bands, correlated at the pixel level. CAMS was initially sponsored by the U.S. Naval Air Systems Command's F/A-18 program office (PMA-265). The current CAMS field tests are under the direction of Northrop-Grumman for the Defense Nuclear Agency (DNA) in support of the Follow-On Open Skies Sensor Evaluation Program (FOSEP) and are scheduled to be conducted in April 1996. 8 figs., 4 tabs.

  20. Optical aperture synthesis with electronically connected telescopes.

    PubMed

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D

    2015-04-16

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths.
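
    The second-order coherence that intensity interferometry measures is the normalized correlation of the intensities recorded at two telescopes; for thermal light it falls from 2 at zero baseline toward 1 once the baseline resolves the source. A toy numerical sketch with simulated exponential intensities (an idealization, not real photon streams):

```python
import numpy as np

def g2(i1, i2):
    """Normalized second-order correlation g2 = <I1 I2> / (<I1><I2>)."""
    return np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2))

rng = np.random.default_rng(1)
# Thermal light has exponentially distributed intensity, so a fully
# correlated pair (zero baseline) gives g2 = <I^2>/<I>^2 = 2 ...
shared = rng.exponential(1.0, 100_000)
# ... while a baseline long enough to resolve the source decorrelates
# the fluctuations and g2 falls to 1.
unrelated = rng.exponential(1.0, 100_000)
```

    Because only intensities are correlated, the detected signal can be digitized and copied, which is why many baselines can be formed electronically from the same telescope pair data.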

  1. Distributed-aperture infrared sensor systems

    NASA Astrophysics Data System (ADS)

    Brusgard, Thomas C.

    1999-07-01

    The ongoing maturation of electro-optic technology, in which the advent of third-generation focal plane arrays is being combined with the capabilities of increasingly powerful signal processing algorithms, now points to a new direction in the design of electro-optic sensor systems for both military and non-military applications. Taking advantage of these advances, Distributed Aperture IR Sensor systems (DAIRS) are currently in development within the Defense Department for installation on a variety of platforms and for utilization in a wide variety of tactical scenarios. DAIRS employs multiple fixed, identical sensors to obtain the functionality that was previously obtained using specialized sensors for each function. In its role in tactical aircraft, DAIRS uses an array of six strategically located sensors which provide 4(pi) steradian coverage, i.e., full-sphere situational awareness (SA), to the aircrew. That awareness provides missile threat warning, IR search and track, battle damage assessment, targeting assistance, and pilotage. DAIRS has applicability in providing expanded SA for surface ships, armored land vehicles, and unmanned air combat vehicles. A typical sensor design has less than twenty-five percent of the weight, volume, and electrical power demand of current federated airborne IR sensor systems and can become operational with a significant reduction in lifetime system cost. DAIRS, when combined with autocueing, may have a significant role in the technological advancement of aircraft proximity warning systems for in-flight collision avoidance. DAIRS is currently funded in part by the Office of Naval Research under the IR Distributed Aperture System (MIDAS) effort, which is funded as a Navy Advanced Technology Demonstration; the DAIRS will undergo airborne testing using four

  2. High-Aperture-Efficiency Horn Antenna

    NASA Technical Reports Server (NTRS)

    Pickens, Wesley; Hoppe, Daniel; Epp, Larry; Kahn, Abdur

    2005-01-01

    A horn antenna (see Figure 1) has been developed to satisfy requirements specific to its use as an essential component of a high-efficiency Ka-band amplifier: the combination of the horn antenna and an associated microstrip-patch antenna array is required to function as a spatial power divider that feeds 25 monolithic microwave integrated-circuit (MMIC) power amplifiers. The foregoing requirement translates to, among other things, a further requirement that the horn produce a uniform, vertically polarized electromagnetic field that illuminates the patches identically, so that the MMICs can operate at maximum efficiency. The horn is fed from a square waveguide of 5.9436-mm-square cross section via a transition piece. The horn features cosine-tapered, dielectric-filled longitudinal corrugations in its vertical walls to create a hard boundary condition; this aspect of the horn design causes the field in the horn aperture to be substantially vertically polarized and to be nearly uniform in amplitude and phase. As used here, cosine-tapered signifies that the depth of the corrugations is a cosine function of distance along the horn. Preliminary results of finite-element simulations of performance have shown that, by virtue of the cosine taper, the impedance response of this horn can be expected to be better than has been achieved previously in a similar horn having linearly tapered dielectric-filled longitudinal corrugations. It is possible to create a hard boundary condition by use of a single dielectric-filled corrugation in each affected wall, but better results can be obtained with more corrugations. Simulations were performed for a one- and a three-corrugation cosine-taper design. For comparison, a simulation was also performed for a linear-taper design (see Figure 2). The three-corrugation design was chosen to minimize the cost of fabrication while still affording acceptably high performance. Future designs using more corrugations per wavelength are expected to provide better

  3. Large aperture nanocomposite deformable mirror technology

    NASA Astrophysics Data System (ADS)

    Chen, Peter C.; Hale, Richard D.

    2007-12-01

    We report progress in the development of deformable mirrors (DM) using nanocomposite materials. For the extremely large telescopes (ELTs) currently being planned, a new generation of DMs with unprecedented performance is a critical path item. The DMs need to have large apertures (meters), continuous surfaces, and low microroughness. Most importantly, they must have excellent static optical figures and yet be sufficiently thin (1-2 mm) and flexible to function with small, low powered actuators. Carbon fiber reinforced plastics (CFRP) have the potential to fulfill these requirements. However, CFRP mirrors made using direct optical replication have encountered a number of problems. Firstly, it is difficult if not impossible for a CFRP mirror to maintain a good static optical figure if a small number of plies are used, but adding more plies to the laminate tends to make the substrate too thick and stiff. Secondly, direct optical replication requires precision mandrels, the costs of which become prohibitive at multi-meter apertures. We report development of a new approach. By using a combination of a novel support structure, selected fibers, and binding resins infused with nanoparticles, it is possible to make millimeter thick optical mirrors that can both maintain good static optical figures and yet still have the required flexibility for actuation. Development and refinement of a non-contact, deterministic process of fine figuring permits generation of accurate optical surfaces without the need for precision optical mandrels. We present data from tests that have been carried out to demonstrate these new processes. A number of flat DMs have been fabricated, as well as concave and convex DMs in spherical, parabolic, and other forms.

  4. Advanced Imaging Optics Utilizing Wavefront Coding.

    SciTech Connect

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.
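
    The defocus insensitivity that wavefront coding buys can be checked numerically: add a cubic phase to a defocused pupil and the point spread function barely changes, whereas the clear-aperture PSF changes drastically. A simplified Fourier-optics sketch, with a square pupil and a cubic strength chosen purely for illustration:

```python
import numpy as np

def psf(defocus_waves, cubic_waves, n=128, pad=4):
    """Incoherent PSF of a square pupil carrying defocus plus a cubic
    phase mask alpha*(x^3 + y^3), computed as |FFT of the pupil|^2."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    phase = 2 * np.pi * (defocus_waves * (X**2 + Y**2)
                         + cubic_waves * (X**3 + Y**3))
    pupil = np.exp(1j * phase)                        # unit amplitude aperture
    field = np.fft.fft2(pupil, s=(pad * n, pad * n))  # zero-padded far field
    p = np.abs(field) ** 2
    return p / p.sum()

def similarity(a, b):
    """Normalized inner product of two PSFs (1 means identical shape)."""
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Three waves of defocus wreck the clear-aperture PSF but leave the
# cubic-phase PSF nearly unchanged; deconvolving with one fixed kernel
# then restores a sharp image over the extended depth of focus.
sim_clear = similarity(psf(0.0, 0.0), psf(3.0, 0.0))
sim_coded = similarity(psf(0.0, 20.0), psf(3.0, 20.0))
```
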

  5. MCNP code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MCNP code is the major Monte Carlo coupled neutron-photon transport research tool at the Los Alamos National Laboratory, and it represents the most extensive Monte Carlo development program in the United States which is available in the public domain. The present code is the direct descendant of the original Monte Carlo work of Fermi, von Neumann, and Ulam at Los Alamos in the 1940s. Development has continued uninterrupted since that time, and the current version of MCNP (or its predecessors) has always included state-of-the-art methods in the Monte Carlo simulation of radiation transport, basic cross section data, geometry capability, variance reduction, and estimation procedures. The authors of the present code have oriented its development toward general user application. The documentation, though extensive, is presented in a clear and simple manner with many examples, illustrations, and sample problems. In addition to providing the desired results, the output listings give a wealth of detailed information (some optional) concerning each stage of the calculation. The code system is continually updated to take advantage of advances in computer hardware and software, including interactive modes of operation, diagnostic interrupts and restarts, and a variety of graphical and video aids.

  6. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  7. Fraunhofer Diffraction Patterns from Apertures Illuminated with Nonparallel Light.

    ERIC Educational Resources Information Center

    Klingsporn, Paul E.

    1979-01-01

    Discusses several aspects of Fraunhofer diffraction patterns from apertures illuminated by diverging light. Develops a generalization to apertures of arbitrary shape which shows that the sizes of the pattern are related by a simple scale factor. Uses the Abbe theory of image formation by diffraction to discuss the intensity of illumination of the…
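
    The Fraunhofer pattern of an aperture is the squared magnitude of its Fourier transform, which makes the scale-factor result easy to verify numerically: widening the slit shrinks the pattern by the same factor. A sketch for a 1-D slit:

```python
import numpy as np

def fraunhofer_intensity(slit_width, n=4096):
    """Far-field intensity of a uniform 1-D slit: |FFT of aperture|^2,
    the familiar sinc^2 pattern with its first zero at spatial
    frequency 1/slit_width (index n/slit_width on the FFT grid)."""
    aperture = np.zeros(n)
    aperture[:slit_width] = 1.0
    return np.abs(np.fft.fft(aperture)) ** 2

i64 = fraunhofer_intensity(64)
i128 = fraunhofer_intensity(128)
# Doubling the slit quadruples the on-axis intensity and halves the
# first-zero frequency: the pattern rescales by a simple factor.
```
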

  8. WFPC2 Polarization Observations: Strategies, Apertures, and Calibration Plans

    NASA Astrophysics Data System (ADS)

    Biretta, John; Sparks, William

    1995-01-01

    We outline several strategies for WFPC2 polarization observations, and summarize their various advantages and disadvantages. Apertures and useful fields of view are described for various rotations of the polarizer. Two apertures are found to be problematic: POLQN18 will be relocated elsewhere on WF2, and we recommend against using POLQP15P. Finally, we summarize the Cycle 4 polarization calibration plan.

  9. Phenomenology of electromagnetic coupling: Conductors penetrating an aperture

    SciTech Connect

    Wright, D.B.; King, R.J.

    1987-06-01

    The purpose of this study was to investigate the coupling effects of conductors penetrating through free-standing apertures. This penetrating-conductor-and-aperture arrangement is referred to as a modified aperture. A penetrating conductor is defined here to be a thin, single wire bent twice at 90-degree angles. The wire was inserted through a rectangular aperture in a metal wall. Vertical segments on both sides of the wall coupled energy from one region to the other. Energy was incident upon the modified aperture from what is referred to as the exterior region. The amount of coupling was measured by a D sensor on the other (interior) side of the wall. This configuration of an aperture in a metal wall was used, as opposed to an aperture in a cavity, in order to simplify the interpretation of the resulting data. The added complexity of multiple cavity resonances was therefore eliminated. Determining the effects of penetrating conductors on aperture coupling is one of several topics being investigated as part of ongoing research at Lawrence Livermore National Laboratory on the phenomenology of electromagnetic coupling. These phenomenology studies are concerned with the vulnerability of electronic systems to high-intensity electromagnetic fields. The investigation is relevant to high-altitude EMP (HEMP), enhanced HEMP (EHEMP), and high-power microwave (HPM) coupling.

  10. Synthetic aperture design for increased SAR image rate

    DOEpatents

    Bielek, Timothy P.; Thompson, Douglas G.; Walker, Bruce C.

    2009-03-03

    High resolution SAR images of a target scene at near video rates can be produced by using overlapped, but nevertheless, full-size synthetic apertures. The SAR images, which respectively correspond to the apertures, can be analyzed in sequence to permit detection of movement in the target scene.
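
    The overlapped-aperture idea can be sketched directly: slide a full-length aperture window along the pulse stream in steps smaller than the aperture, so each image still uses a full-size aperture (full resolution) while images form at a proportionally higher rate. The parameter values below are illustrative only.

```python
def overlapped_apertures(num_pulses, aperture_len, step):
    """Start/stop pulse indices of full-size synthetic apertures taken
    every `step` pulses; step < aperture_len gives overlapped apertures
    and a higher image rate at unchanged resolution."""
    return [(s, s + aperture_len)
            for s in range(0, num_pulses - aperture_len + 1, step)]

disjoint = overlapped_apertures(1000, 200, step=200)    # 5 images
overlapped = overlapped_apertures(1000, 200, step=50)   # 17 images, 75% overlap
```

    Comparing consecutive overlapped images then supports the movement detection mentioned in the patent abstract.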

  11. The sonar aperture and its neural representation in bats.

    PubMed

    Heinrich, Melina; Warmbold, Alexander; Hoffmann, Susanne; Firzlaff, Uwe; Wiegrebe, Lutz

    2011-10-26

    As opposed to visual imaging, biosonar imaging of spatial object properties represents a challenge for the auditory system because its sensory epithelium is not arranged along space axes. For echolocating bats, object width is encoded by the amplitude of its echo (echo intensity) but also by the naturally covarying spread of angles of incidence from which the echoes impinge on the bat's ears (sonar aperture). It is unclear whether bats use the echo intensity and/or the sonar aperture to estimate an object's width. We addressed this question in a combined psychophysical and electrophysiological approach. In three virtual-object playback experiments, bats of the species Phyllostomus discolor had to discriminate simple reflections of their own echolocation calls differing in echo intensity, sonar aperture, or both. Discrimination performance for objects with physically correct covariation of sonar aperture and echo intensity ("object width") did not differ from discrimination performances when only the sonar aperture was varied. Thus, the bats were able to detect changes in object width in the absence of intensity cues. The psychophysical results are reflected in the responses of a population of units in the auditory midbrain and cortex that responded strongest to echoes from objects with a specific sonar aperture, regardless of variations in echo intensity. Neurometric functions obtained from cortical units encoding the sonar aperture are sufficient to explain the behavioral performance of the bats. These current data show that the sonar aperture is a behaviorally relevant and reliably encoded cue for object size in bat sonar.

  12. On the possibility of intraocular adaptive optics

    NASA Astrophysics Data System (ADS)

    Vdovin, Gleb; Loktev, Mikhail; Naumov, Alexander

    2003-04-01

    We consider the technical possibility of an adaptive contact lens and an adaptive eye lens implant based on the modal liquid crystal wavefront corrector, aimed to correct the accommodation loss and higher-order aberrations of the human eye. Our first demonstrator with 5 mm optical aperture is capable of changing the focusing power in the range of 0 to +3 diopters and can be controlled via a wireless capacitive link. These properties make the corrector potentially suitable for implantation into the human eye or for use as an adaptive contact lens. We also discuss possible feedback strategies, aimed to improve visual acuity and to achieve supernormal vision with implantable adaptive optics.

  13. Adaptation and perceptual norms

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  14. The Redox Code

    PubMed Central

    Jones, Dean P.

    2015-01-01

    Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126

  15. Adaptive array antenna for satellite cellular and direct broadcast communications

    NASA Technical Reports Server (NTRS)

    Horton, Charles R.; Abend, Kenneth

    1993-01-01

    Adaptive phased-array antennas provide cost-effective implementation of large, lightweight apertures with high directivity and precise beam-shape control. Adaptive self-calibration allows mechanical tolerances across the aperture and electrical component tolerances to be relaxed, providing high performance with a low-cost, lightweight array, even in the presence of large physical distortions. The beam shape is programmable and adaptable to changes in technical and operational requirements. Adaptive digital beamforming eliminates uplink contention by allowing a single electronically steerable antenna to service a large number of receivers with beams that adaptively focus on one source while eliminating interference from others. A large, adaptively calibrated, and fully programmable aperture can also provide precise beam-shape control for power-efficient direct broadcast from space. Advanced adaptive digital beamforming technologies are described for: (1) electronic compensation of aperture distortion, (2) multiple-receiver adaptive space-time processing, and (3) downlink beam-shape control. Cost considerations for space-based array applications are also discussed.
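
    The abstract does not give the beamforming equations, but the interference-nulling behaviour it describes is commonly modeled with a minimum-variance distortionless-response (MVDR) beamformer. A minimal sketch, where the array geometry, source angles, and signal levels are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def steering_vector(n_elements, d_over_lambda, theta):
    """Plane-wave steering vector for a uniform linear array."""
    n = np.arange(n_elements)
    return np.exp(2j * np.pi * d_over_lambda * n * np.sin(theta))

def mvdr_weights(R, a):
    """MVDR weights w = R^-1 a / (a^H R^-1 a): unity gain toward a,
    minimum output power (hence nulls on strong interferers)."""
    Ri_a = np.linalg.solve(R, a)
    return Ri_a / (a.conj() @ Ri_a)

rng = np.random.default_rng(0)
N, d = 16, 0.5                                   # 16 elements, lambda/2 spacing
a_sig = steering_vector(N, d, np.deg2rad(10))    # desired source at +10 deg
a_int = steering_vector(N, d, np.deg2rad(-30))   # interferer at -30 deg

# Simulated snapshots: desired source + strong interferer + receiver noise
T = 2000
s = rng.standard_normal(T) + 1j * rng.standard_normal(T)
i = 10 * (rng.standard_normal(T) + 1j * rng.standard_normal(T))
n = 0.1 * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
X = np.outer(a_sig, s) + np.outer(a_int, i) + n

R = X @ X.conj().T / T                           # sample covariance
w = mvdr_weights(R, a_sig)

print(abs(w.conj() @ a_sig))                     # ~1.0 (distortionless constraint)
print(abs(w.conj() @ a_int))                     # near zero (interference nulled)
```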

  16. Measurements of Aperture Averaging on Bit-Error-Rate

    NASA Technical Reports Server (NTRS)

    Bastin, Gary L.; Andrews, Larry C.; Phillips, Ronald L.; Nelson, Richard A.; Ferrell, Bobby A.; Borbath, Michael R.; Galus, Darren J.; Chin, Peter G.; Harris, William G.; Marin, Jose A.; Burdge, Geoffrey L.; Wayne, David; Pescatore, Robert

    2005-01-01

    We report on measurements, made at the Shuttle Landing Facility (SLF) runway at Kennedy Space Center, of receiver aperture averaging effects on a propagating optical Gaussian beam wave over a propagation path of 1,000 m. A commercially available instrument with both transmit and receive apertures was used to transmit a modulated laser beam operating at 1550 nm through a transmit aperture of 2.54 cm. An identical model of the same instrument was used as a receiver with a single aperture that was varied in size up to 20 cm to measure the effect of receiver aperture averaging on bit error rate. Simultaneous measurements were also made with a scintillometer and local weather station instruments to characterize atmospheric conditions along the propagation path during the experiments.
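
    For context, a widely used weak-turbulence, plane-wave approximation for the aperture-averaging factor (a standard result from the Andrews & Phillips propagation literature, not a formula stated in this abstract) can be evaluated at the experiment's nominal parameters:

```python
import numpy as np

def aperture_averaging_factor(D, wavelength, L):
    """Weak-turbulence, plane-wave aperture-averaging factor
    A = [1 + 1.062 * k * D^2 / (4 * L)]^(-7/6),
    where k = 2*pi/lambda, D is receiver diameter, L is path length."""
    k = 2 * np.pi / wavelength
    return (1 + 1.062 * k * D**2 / (4 * L)) ** (-7 / 6)

wavelength = 1550e-9   # 1550 nm laser, as in the experiment
L = 1000.0             # 1 km path, as in the experiment
for D_cm in (2.54, 10.0, 20.0):
    A = aperture_averaging_factor(D_cm / 100, wavelength, L)
    print(f"D = {D_cm:5.2f} cm  ->  A = {A:.4f}")   # A falls as D grows
```

Smaller A means stronger averaging of scintillation and hence lower irradiance variance at the detector, which is the mechanism by which a larger receive aperture improves bit error rate.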

  17. Influence of pressure change during hydraulic tests on fracture aperture.

    PubMed

    Ji, Sung-Hoon; Koh, Yong-Kwon; Kuhlman, Kristopher L; Lee, Moo Yul; Choi, Jong Won

    2013-03-01

    In a series of field experiments, we evaluated the influence of a small water pressure change on fracture aperture during a hydraulic test. An experimental borehole was instrumented at the Korea Atomic Energy Research Institute (KAERI) Underground Research Tunnel (KURT). The target fracture for testing was identified from analyses of borehole logging and hydraulic tests. A double packer system was developed and installed in the test borehole to directly observe the aperture change due to water pressure change. Using this packer system, both aperture and flow rate were directly observed under various water pressures. Results indicate that a slight change in fracture hydraulic head leads to an observable change in aperture. This suggests that aperture change should be considered when analyzing hydraulic test data from a sparsely fractured rock aquifer.

  18. Microfabricated high-bandpass Foucault aperture for electron microscopy

    DOEpatents

    Glaeser, Robert; Cambie, Rossana; Jin, Jian

    2014-08-26

    A variant of the Foucault (knife-edge) aperture is disclosed that is designed to provide single-sideband (SSB) contrast at low spatial frequencies but retain conventional double-sideband (DSB) contrast at high spatial frequencies in transmission electron microscopy. The aperture includes a plate with an inner open area, a support extending from the plate at an edge of the open area, a half-circle feature mounted on the support and located at the center of the aperture open area. The radius of the half-circle portion of reciprocal space that is blocked by the aperture can be varied to suit the needs of electron microscopy investigation. The aperture is fabricated from conductive material which is preferably non-oxidizing, such as gold, for example.

  19. Characterization of fracture aperture for groundwater flow and transport

    NASA Astrophysics Data System (ADS)

    Sawada, A.; Sato, H.; Tetsu, K.; Sakamoto, K.

    2007-12-01

    This paper presents experiments and numerical analyses of flow and transport carried out on natural fractures and transparent replicas of fractures. The purpose of this study was to improve the understanding of the role of heterogeneous aperture patterns on channelization of groundwater flow and dispersion in solute transport. The research proceeded as follows: First, a precision plane grinder was applied perpendicular to the fracture plane to characterize the aperture distribution on a natural fracture with an increment size of 1 mm. Although time- and labor-intensive, this approach provided a detailed, three-dimensional picture of the pattern of fracture aperture. This information was analyzed to provide quantitative measures for the fracture aperture distribution, including JRC (Joint Roughness Coefficient) and fracture contact area ratio. These parameters were used to develop numerical models with corresponding synthetic aperture patterns. The transparent fracture replica and numerical models were then used to study how transport is affected by the aperture spatial pattern. In the transparent replica, transmitted light intensity measured by a CCD camera was used to image channeling and dispersion due to the fracture aperture spatial pattern. The CCD image data were analyzed to obtain quantitative fracture aperture and tracer concentration data according to the Lambert-Beer law. The experimental results were analyzed using the numerical models. Comparison of the numerical models to the transparent replica provided information about the nature of channeling and dispersion due to aperture spatial patterns. These results support the development of a methodology for defining a representative fracture aperture of a simplified parallel fracture model for flow and transport in heterogeneous fractures for contaminant transport analysis.
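
    The Lambert-Beer inversion mentioned above can be sketched in a few lines; the absorption coefficient and aperture field below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Beer-Lambert: I = I0 * exp(-alpha * b), so b = ln(I0 / I) / alpha,
# where b is the local fracture aperture and alpha the absorption
# coefficient of the dyed fluid (the value here is illustrative).
alpha = 5.0e3          # absorption coefficient, 1/m (assumed)

def aperture_from_intensity(I, I0, alpha):
    """Invert transmitted light intensity to aperture via Beer-Lambert."""
    return np.log(I0 / I) / alpha

# Synthetic check: a known aperture field round-trips through the model
rng = np.random.default_rng(1)
b_true = 200e-6 * (1 + 0.3 * rng.random((64, 64)))   # ~200 um apertures
I0 = 1000.0                                          # incident intensity
I = I0 * np.exp(-alpha * b_true)                     # simulated CCD image
b_est = aperture_from_intensity(I, I0, alpha)
print(np.allclose(b_est, b_true))                    # True
```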

  20. Impact on stereo-acuity of two presbyopia correction approaches: monovision and small aperture inlay

    PubMed Central

    Fernández, Enrique J.; Schwarz, Christina; Prieto, Pedro M.; Manzanera, Silvestre; Artal, Pablo

    2013-01-01

    Some currently applied approaches to correcting presbyopia may reduce stereovision. In this work, stereo-acuity was measured for two methods: (1) monovision and (2) a small aperture inlay in one eye. When performing the experiment, a prototype of a binocular adaptive optics vision analyzer was employed. The system allowed simultaneous measurement and manipulation of the optics in both eyes of a subject. The apparatus incorporated two programmable spatial light modulators: one phase-only device using liquid crystal on silicon technology for wavefront manipulation and one intensity modulator for controlling the exit pupils. The prototype was also equipped with a stimulus generator for creating retinal disparity based on two micro-displays. The three-needle test was programmed for characterizing stereo-acuity. Subjects underwent a two-alternative forced-choice test. The following cases were tested for the stimulus placed at distance: (a) natural vision; (b) 1.5 D monovision; (c) 0.75 D monovision; (d) natural vision and small pupil; (e) 0.75 D monovision and small pupil. In all cases the standard pupil diameter was 4 mm and the small pupil diameter was 1.6 mm. The use of a small aperture significantly reduced the negative impact of monovision on stereopsis. The results of the experiment suggest that combining micro-monovision with a small aperture, which is currently being implemented as a corneal inlay, can yield values of stereo-acuity close to those attained under normal binocular vision. PMID:23761846

  1. Recent Enhancements of the Phased Array Mirror Extendible Large Aperture (PAMELA) Telescope Testbed at MSFC

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Burdine, Robert (Technical Monitor)

    2001-01-01

    Recent incremental upgrades to the Phased Array Mirror Extendible Large Aperture (PAMELA) telescope testbed have enabled the demonstration of phasing (with a monochromatic source) of clusters of primary mirror segments down to the diffraction limit. PAMELA upgrades include an improved Shack-Hartmann wavefront sensor, passive viscoelastic damping treatments for the voice-coil actuators, mechanical improvement of mirror surface figures, and optical bench baffling. This report summarizes the recent PAMELA upgrades, discusses the lessons learned, and presents the status of this unique testbed for wavefront sensing and control. The Marshall Space Flight Center acquired the Phased Array Mirror Extendible Large Aperture (PAMELA) telescope in 1993 after Kaman Aerospace was unable to complete integration and testing under the limited SDIO and DARPA funding. PAMELA is a 36-segment, half-meter-aperture adaptive telescope which utilizes a Shack-Hartmann wavefront sensor, inductive coil edge sensors, voice coil actuators, imaging CCD cameras, and interferometry for figure alignment, wavefront sensing, and control. MSFC originally obtained PAMELA to supplement its research in the interactions of control systems with flexible structures. In August 1994, complete tip, tilt, and piston control was successfully demonstrated using the Shack-Hartmann wavefront sensor and the inductive edge sensors.

  2. Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach using beamformed ultrasound data

    PubMed Central

    Zhang, Haichong K.; Bell, Muyinatu A. Lediju; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-01-01

    Photoacoustic (PA) imaging has been developed for various clinical and pre-clinical applications, and acquiring pre-beamformed channel data is necessary to reconstruct these images. However, accessing these pre-beamformed channel data requires custom hardware to enable parallel beamforming, and is available for a limited number of research ultrasound platforms. To broaden the impact of clinical PA imaging, our goal is to devise a new PA reconstruction approach that uses ultrasound post-beamformed radio frequency (RF) data rather than raw channel data, because this type of data is readily available in both clinical and research ultrasound systems. In our proposed Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach, post-beamformed RF data from a clinical ultrasound scanner are considered as input data for an adaptive synthetic aperture beamforming algorithm. When receive focusing is applied prior to obtaining these data, the focal point is considered as a virtual element, and synthetic aperture beamforming is implemented assuming that the photoacoustic signals are received at the virtual element. The resolution and SNR obtained with the proposed method were compared to that obtained with conventional delay-and-sum beamforming with 99.87% and 91.56% agreement, respectively. In addition, we experimentally demonstrated feasibility with a pulsed laser diode setup. Results indicate that the post-beamformed RF data from any commercially available ultrasound platform can potentially be used to create PA images. PMID:27570697
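
    A minimal sketch of the virtual-element idea described above, assuming a simple geometry, sound speed, and one-way photoacoustic delay model (all illustrative, not the authors' exact implementation): each receive-focused scan line is treated as data recorded at its focal point, and delay-and-sum is then applied over those virtual elements:

```python
import numpy as np

# SPARE-style re-beamforming sketch: the receive focus of line i becomes a
# virtual element at (x_lines[i], z_f); one-way photoacoustic delays are
# used in a second delay-and-sum pass. Parameters are assumptions.
c, fs, z_f = 1540.0, 40e6, 0.02             # sound speed, sampling, focal depth
x_lines = np.linspace(-0.01, 0.01, 64)      # lateral position of each scan line
n_samp = 2048

def delay_idx(x, z, xv):
    # sample index for: time = (focal depth + virtual-element-to-pixel distance)/c
    return np.round((z_f + np.hypot(x - xv, z - z_f)) / c * fs).astype(int)

# Synthetic post-beamformed RF: one point absorber at (x0, z0)
x0, z0 = 0.002, 0.03
rf = np.zeros((len(x_lines), n_samp))
for i, xv in enumerate(x_lines):
    rf[i, delay_idx(x0, z0, xv)] = 1.0

# Re-beamform onto an image grid using the virtual elements
x_grid = np.linspace(-0.008, 0.008, 81)
z_grid = np.linspace(0.022, 0.038, 81)
img = np.zeros((len(z_grid), len(x_grid)))
for i, xv in enumerate(x_lines):
    for iz, z in enumerate(z_grid):
        idx = delay_idx(x_grid, z, xv)
        img[iz] += rf[i, np.clip(idx, 0, n_samp - 1)]

iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(x_grid[ix], z_grid[iz])               # peak recovers the absorber position
```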

  3. Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach using beamformed ultrasound data.

    PubMed

    Zhang, Haichong K; Bell, Muyinatu A Lediju; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M

    2016-08-01

    Photoacoustic (PA) imaging has been developed for various clinical and pre-clinical applications, and acquiring pre-beamformed channel data is necessary to reconstruct these images. However, accessing these pre-beamformed channel data requires custom hardware to enable parallel beamforming, and is available for a limited number of research ultrasound platforms. To broaden the impact of clinical PA imaging, our goal is to devise a new PA reconstruction approach that uses ultrasound post-beamformed radio frequency (RF) data rather than raw channel data, because this type of data is readily available in both clinical and research ultrasound systems. In our proposed Synthetic-aperture based photoacoustic re-beamforming (SPARE) approach, post-beamformed RF data from a clinical ultrasound scanner are considered as input data for an adaptive synthetic aperture beamforming algorithm. When receive focusing is applied prior to obtaining these data, the focal point is considered as a virtual element, and synthetic aperture beamforming is implemented assuming that the photoacoustic signals are received at the virtual element. The resolution and SNR obtained with the proposed method were compared to that obtained with conventional delay-and-sum beamforming with 99.87% and 91.56% agreement, respectively. In addition, we experimentally demonstrated feasibility with a pulsed laser diode setup. Results indicate that the post-beamformed RF data from any commercially available ultrasound platform can potentially be used to create PA images. PMID:27570697

  4. Novel genetic algorithm for the multiplexed computer-generated hologram with polygonal apertures

    NASA Astrophysics Data System (ADS)

    Gillet, Jean-Numa; Sheng, Yunlong

    2002-09-01

    A novel genetic algorithm (GA) with a Lamarckian search is proposed for the design of the multiplexed computer-generated hologram (MCGH) with polygonal apertures. The Fraunhofer image of the new MCGH is computed by coherent addition of the subhologram subimages. The subimages are obtained by multiplying the fast Fourier transforms of the subhologram transmittance distributions by layout coefficients computed with the Abbe transform. The division into polygonal apertures is the same for all cells, and defines the polygonal layout of the cells. In our preceding designs of the MCGH with polygonal apertures, only the subhologram transmittances, but not the polygonal layout of the cells, were optimized with our iterative subhologram design algorithm (ISDA). In this paper, we optimize the polygonal layout of the MCGH cells for the first time with a novel GA. For fabrication by e-beam lithography, each cell is composed of a number of stripes. Each stripe is divided into some trapezoidal apertures, which can (i) take a number of different shapes and (ii) belong to a number of different subholograms. The number of possible polygonal layouts for the cells is therefore huge: 2^64 ≈ 1.84 x 10^19 in the case of an MCGH with five subholograms. Each possible layout is coded as a chromosome of bits. Our novel GA performs crossovers and mutations. However, unlike the classical GA, our new GA also uses a Lamarckian search based on gradient descent, and rapidly determines the optimal polygonal layout for the MCGH cells.
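
    A toy illustration of a Lamarckian GA on a small bit-string layout problem (the paper's 2^64-layout search space is far larger, and its local search is gradient-based rather than the greedy bit-flip hill climb used here). The defining Lamarckian feature is that the locally improved chromosome is written back into the population:

```python
import random
random.seed(0)

TARGET = [1, 0] * 16                      # toy 32-bit "layout" to recover

def fitness(chrom):
    return sum(a == b for a, b in zip(chrom, TARGET))

def local_search(chrom):
    """Lamarckian step: greedy bit-flip hill climb; the improved
    chromosome replaces the original in the population."""
    for i in range(len(chrom)):
        flipped = chrom[:]
        flipped[i] ^= 1
        if fitness(flipped) > fitness(chrom):
            chrom = flipped
    return chrom

def ga(pop_size=20, generations=30, p_mut=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(c) for c in pop]          # Lamarckian refinement
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(fitness(best), len(TARGET))   # reaches the optimum
```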

  5. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  6. Fracture-aperture alteration induced by calcite precipitation

    NASA Astrophysics Data System (ADS)

    Jones, T.; Detwiler, R. L.

    2013-12-01

    Mineral precipitation significantly alters the transport properties of fractured rock. Chemical solubility gradients that favor precipitation induce mineral growth, which decreases the local aperture and alters preferential flow paths. Understanding the resulting development of spatial heterogeneities is necessary to predict the evolution of transport properties in the subsurface. We present experimental results that quantify the relationship between mineral precipitation and aperture alteration in a transparent analog fracture, 7.62 cm x 7.62 cm, with a uniform aperture of ~200 μm. Prior to flow experiments, a pump circulated a super-saturated calcite solution over the bottom glass, coating the glass surface with calcite. This method of seeding resulted in clusters of calcite crystals with large reactive surface area and provided micro-scale variability in the aperture field. A continuous flow syringe pump injected a reactive fluid into the fracture at 0.5 ml/min. The fluid was a mixture of sodium bicarbonate (NaHCO3, 0.02 M) and calcium chloride (CaCl2, 0.0004 M) with a saturation index, Ω, of 8.51 with respect to calcite. A strobed LED panel backlit the fracture and a high-resolution CCD camera monitored changes in transmitted light intensity. Light transmission techniques provided a quantitative measurement of fracture aperture over the flow field. Results from these preliminary experiments showed growth near the inlet of the fracture, with decreasing precipitation rates in the flow direction. Over a period of two weeks, the fracture aperture decreased by 17% within the first 4 mm of the inlet. Newly precipitated calcite bridged individual crystal clusters and smoothed the reacting surface. This observation is an interesting contradiction to the expectation of surface roughening induced by mineral growth. Additionally, the aperture decreased uniformly across the width of the fracture due to the initial aperture distribution. Future experiments of precipitation

  7. Diffraction aperture non-ideal behaviour of air coupled transducers array elements designed for NDT.

    PubMed

    Prego Borges, J L; Montero de Espinosa, F; Salazar, J; Garcia-Alvarez, J; Chávez, J A; Turó, A; Garcia-Hernandez, M J

    2006-12-22

    Air-coupled piezoelectric ultrasonic array transducers are a novel tool that could lead to interesting advances in the area of non-contact laminar material testing using Lamb wave propagation techniques. A key issue in the development of such transducers is their efficient coupling to the air medium (the impedance mismatch between the piezoelectric material and air is 90 dB or more). Matching layers are used to attain good impedance matching and avoid serious signal degradation. However, the introduction of these matching layers modifies the transducer surface behaviour and, consequently, radiation characteristics are altered, making the usual idealization criterion (of uniform surface movement) adopted for field simulation purposes inaccurate. In our system, we have a concave linear-array transducer of 64 elements (electrically coupled by pairs) working at 0.8 MHz made of PZ27 rectangular piezoceramics (15 mm x 0.3 mm) with two matching layers made of polyurethane and porous cellulose bonded onto them. Experimental measurements of the acoustic aperture of single excited array elements have shown an increase in the geometrical dimensions of the active surface. A sub-millimeter vibrometer laser scan has revealed an extension of the aperture beyond the supposed physical dimensions of a single array element. A non-uniform, symmetric, apodized surface vibration velocity amplitude profile with a concave delay contour indicates the presumed existence of travelling-wave phenomena over the surface of the outer array matching layer. Also, asymptotic propagation velocities around 2500 m/s and attenuation coefficients between 15 and 20 dB/mm have been determined for the travelling waves, showing clear tendencies. Further comparisons between the experimental measurements of the single-array-element field radiation diagram and the simulated equivalent-aperture counterpart reveal good agreement versus the ideal (uniformly displaced) rectangular aperture. For this purpose an Impulse Response Method

  8. Generalization of Prism Adaptation

    ERIC Educational Resources Information Center

    Redding, Gordon M.; Wallace, Benjamin

    2006-01-01

    Prism exposure produces 2 kinds of adaptive response. Recalibration is ordinary strategic remapping of spatially coded movement commands to rapidly reduce performance error. Realignment is the extraordinary process of transforming spatial maps to bring the origins of coordinate systems into correspondence. Realignment occurs when spatial…

  9. Video coding with dynamic background

    NASA Astrophysics Data System (ADS)

    Paul, Manoranjan; Lin, Weisi; Lau, Chiew Tong; Lee, Bu-Sung

    2013-12-01

    Motion estimation (ME) and motion compensation (MC) using variable block size, sub-pixel search, and multiple reference frames (MRFs) are the major reasons for the improved coding performance of the H.264 video coding standard over other contemporary coding standards. The concept of MRFs is suitable for repetitive motion, uncovered background, non-integer pixel displacement, lighting change, etc. The requirements of index codes for the reference frames, computational time in ME & MC, and memory buffers for coded frames limit the number of reference frames used in practical applications. In typical video sequences, the previous frame is used as a reference frame in 68-92% of cases. In this article, we propose a new video coding method using a reference frame [i.e., the most common frame in scene (McFIS)] generated by dynamic background modeling. McFIS is more effective in terms of rate-distortion and computational time performance compared to the MRFs techniques. It also has an inherent capability for scene change detection (SCD), enabling adaptive group of pictures (GOP) size determination. As a result, we integrate SCD (for GOP determination) with reference frame generation. The experimental results show that the proposed coding scheme outperforms H.264 video coding with five reference frames and the two relevant state-of-the-art algorithms by 0.5-2.0 dB with less computational time.
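
    A minimal stand-in for the dynamic background modeling used to build a McFIS-like reference frame, assuming a simple per-pixel exponential running average (the paper's background model is more elaborate; frame sizes and the moving object are synthetic):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model: a simple stand-in
    for per-pixel background modeling used to build a long-term reference."""
    return (1 - alpha) * bg + alpha * frame

rng = np.random.default_rng(2)
H, W = 48, 64
static = rng.integers(0, 256, (H, W)).astype(float)   # true static background

bg = np.zeros((H, W))
for t in range(200):
    frame = static + rng.normal(0, 2, (H, W))         # sensor noise
    x = (4 * t) % (W - 8)                             # a small moving block
    frame[20:28, x:x + 8] = 255.0                     # occludes the background
    bg = update_background(bg, frame)

# Away from the mover's path, the model converges to the static scene,
# so it can serve as a stable long-term reference frame.
mask = np.ones((H, W), bool)
mask[20:28, :] = False
err = np.abs(bg - static)[mask].mean()
print(err)    # small residual despite noise and the moving object
```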

  10. Detection of breast microcalcifications using synthetic-aperture ultrasound

    NASA Astrophysics Data System (ADS)

    Huang, Lianjie; Labyed, Yassin; Lin, Youzuo; Zhang, Zhigang; Pohl, Jennifer; Sandoval, Daniel; Williamson, Michael

    2012-03-01

    Ultrasound could be an attractive imaging modality for detecting breast microcalcifications, but it requires significant improvement in image resolution and quality. Recently, we have used tissue-equivalent phantoms to demonstrate that synthetic-aperture ultrasound has the potential to detect small targets. In this paper, we study the in vivo imaging capability of a real-time synthetic-aperture ultrasound system for detecting breast microcalcifications. This custom-built Los Alamos National Laboratory (LANL) synthetic-aperture ultrasound system has a maximum frame rate of 25 Hz and is one of the very first medical devices capable of acquiring synthetic-aperture ultrasound data and forming ultrasound images in real time, making synthetic-aperture ultrasound feasible for clinical applications. We recruited patients whose screening mammograms showed breast microcalcifications, and used LANL's synthetic-aperture ultrasound system to scan the regions with microcalcifications. Our preliminary in vivo patient imaging results demonstrate that synthetic-aperture ultrasound is a promising imaging modality for detecting breast microcalcifications.

  11. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    NASA Astrophysics Data System (ADS)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer: a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. For both techniques this results in a frame rate of 18 Hz. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256, compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture were found to perform similarly. At 90 mm depth, Explososcan's FWHM performance is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels by a factor of four and still, in general, improve the imaging quality.
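
    FWHM, the lateral-resolution metric used in this comparison, is typically measured from a point-spread-function profile by interpolating the half-maximum crossings. A small self-contained sketch (the Gaussian test profile is illustrative, not simulation data from the paper):

```python
import numpy as np

def fwhm(x, y):
    """Full width at half maximum of a single-peaked profile,
    with linear interpolation at the half-maximum crossings."""
    half = y.max() / 2
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    xl = np.interp(half, [y[i0 - 1], y[i0]], [x[i0 - 1], x[i0]])
    xr = np.interp(half, [y[i1 + 1], y[i1]], [x[i1 + 1], x[i1]])
    return xr - xl

x = np.linspace(-5, 5, 2001)
sigma = 0.8
y = np.exp(-x**2 / (2 * sigma**2))   # Gaussian stand-in for a PSF profile
# For a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma, so the two values agree:
print(fwhm(x, y), 2 * np.sqrt(2 * np.log(2)) * sigma)
```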

  12. Eyeglass: A Very Large Aperture Diffractive Space Telescope

    SciTech Connect

    Hyde, R; Dixit, S; Weisberg, A; Rushford, M

    2002-07-29

    Eyeglass is a very large aperture (25-100 meter) space telescope consisting of two distinct spacecraft, separated in space by several kilometers. A diffractive lens provides the telescope's large aperture, and a separate, much smaller, space telescope serves as its mobile eyepiece. Use of a transmissive diffractive lens solves two basic problems associated with very large aperture space telescopes; it is inherently fieldable (lightweight and flat, hence packagable and deployable) and virtually eliminates the traditional, very tight, surface shape tolerances faced by reflecting apertures. The potential drawback to use of a diffractive primary (very narrow spectral bandwidth) is eliminated by corrective optics in the telescope's eyepiece. The Eyeglass can provide diffraction-limited imaging with either single-band, multiband, or continuous spectral coverage. Broadband diffractive telescopes have been built at LLNL and have demonstrated diffraction-limited performance over a 40% spectral bandwidth (0.48-0.72 µm). As one approach to packaging a large aperture for launch, a foldable lens has been built and demonstrated. A 75 cm aperture diffractive lens was constructed from 6 panels of 1 mm thick silica; it achieved diffraction-limited performance both before and after folding. This multiple-panel folding-lens approach is currently being scaled up at LLNL. We are building a 5 meter aperture foldable lens, involving 72 panels of 700 µm thick glass sheets, diffractively patterned to operate as a coherent f/50 lens.
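
    The "very narrow spectral bandwidth" drawback follows from the chromatic behaviour of any diffractive lens, whose focal length scales inversely with wavelength; a short numerical illustration (the design wavelength and focal length below are illustrative, not Eyeglass parameters):

```python
# A diffractive lens focuses with focal length inversely proportional to
# wavelength, f(lam) = f0 * lam0 / lam. The resulting chromatic focal
# spread is what the Eyeglass eyepiece optics must correct.
lam0 = 0.60e-6        # design wavelength, m (illustrative)
f0 = 5000.0           # design focal length, m (illustrative)

def focal_length(lam):
    return f0 * lam0 / lam

# Focal shift across the demonstrated 0.48-0.72 um band
for lam_um in (0.48, 0.60, 0.72):
    print(lam_um, focal_length(lam_um * 1e-6))
```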

  13. A synthetic aperture acoustic prototype system

    NASA Astrophysics Data System (ADS)

    Luke, Robert H.; Bishop, Steven S.; Chan, Aaron M.; Gugino, Peter M.; Donzelli, Thomas P.; Soumekh, Mehrdad

    2015-05-01

    A novel quasi-monostatic system operating in a side-scan synthetic aperture acoustic (SAA) imaging mode is presented. This research project's objectives are to explore the military utility of outdoor continuous sound imaging for roadside foliage and target detection. The acoustic imaging method has several militarily relevant advantages: it is immune to RF jamming, offers superior spatial resolution compared to 0.8-2.4 GHz ground penetrating radar (GPR), is capable of standoff side- and forward-looking scanning, and has relatively low cost, weight, and size compared to GPR technologies. The prototype system's broadband 2-17 kHz LFM chirp transceiver is mounted on a manned all-terrain vehicle. Targets are positioned within the acoustic main beam at slant ranges of two to seven meters and on surfaces such as dirt, grass, gravel, and weathered asphalt, including with an intervening metallic chain link fence. Acoustic image reconstructions and signature plots provide a means for literal interpretation and quantifiable analyses.

  14. Tissue harmonic synthetic aperture ultrasound imaging.

    PubMed

    Hemmsen, Martin Christian; Rasmussen, Joachim Hee; Jensen, Jørgen Arendt

    2014-10-01

    Synthetic aperture sequential beamforming (SASB) and tissue harmonic imaging (THI) are combined to improve the image quality of medical ultrasound imaging. The technique is evaluated in a comparative study against dynamic receive focusing (DRF). The objective is to investigate whether SASB combined with THI improves the image quality compared to DRF-THI. The major benefit of SASB is the reduced bandwidth between the probe and the processing unit. A BK Medical 2202 Ultraview ultrasound scanner was used to acquire beamformed RF data for offline evaluation. Acquisitions were interleaved between methods, and data were recorded with and without pulse inversion for tissue harmonic imaging. Data were acquired using a Sound Technology 192-element convex array transducer from both a wire phantom and a tissue-mimicking phantom to investigate spatial resolution and penetration. In vivo scans were also performed for a visual comparison. The spatial resolution for SASB-THI is on average 19% better than for DRF-THI, and the investigation of penetration showed an equally good signal-to-noise ratio. In vivo B-mode scans were made and compared. The comparison showed that SASB-THI reduces artifact and noise interference and improves image contrast and spatial resolution.
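
    The pulse-inversion step behind THI can be demonstrated with a toy nonlinearity: summing the responses to a pulse and its inverted copy cancels the linear (fundamental) component and keeps the even-harmonic component. The pulse parameters and the quadratic distortion model below are illustrative assumptions, not the scanner's physics:

```python
import numpy as np

fs, f0 = 50e6, 3e6                       # sampling rate and pulse center frequency
t = np.arange(0, 4e-6, 1 / fs)
pulse = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)

def propagate(x, a=1.0, b=0.2):
    # toy quadratic nonlinearity standing in for nonlinear tissue propagation
    return a * x + b * x**2

# Pulse inversion: the sum of the two echoes equals 2*b*pulse**2,
# i.e. the fundamental cancels and only even-harmonic energy remains.
summed = propagate(pulse) + propagate(-pulse)
print(np.allclose(summed, 0.4 * pulse**2))           # True

spec = np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = freqs > 3e6                                   # look above the fundamental
f_peak = freqs[band][spec[band].argmax()]
print(f_peak / 1e6)                                  # ~6 MHz: the second harmonic
```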

  15. High numerical aperture multilayer Laue lenses

    DOE PAGES

    Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; Krzywinski, Jacek; Meents, Alke; Pennicard, David; Graafsma, Heinz; Barty, Anton; Bean, Richard J.; Barthelmess, Miriam; et al

    2015-06-01

    The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.
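
    The layer geometry behind such lenses follows the standard zone-plate relations: zone radii r_n ≈ sqrt(n·λ·f), with the achievable resolution set by the outermost zone width (≈ 1.22·Δr_N by the Rayleigh criterion). A sketch with illustrative hard-X-ray parameters, not the paper's actual lens:

```python
import numpy as np

# Standard zone-plate relations (illustrative parameters, not the
# fabricated lens): r_n = sqrt(n * lam * f); the outermost zone width
# dr_N sets the resolution, delta ~ 1.22 * dr_N.
E_keV = 17.0                      # hard X-ray energy (assumed)
lam = 1.2398e-9 / E_keV           # wavelength in m from E in keV
f = 2e-3                          # focal length, m (assumed)

n = np.arange(1, 5001)
r = np.sqrt(n * lam * f)          # zone radii
dr = np.diff(r)                   # layer thicknesses, thinnest at the edge
print(dr[0], dr[-1])              # zones shrink toward the aperture edge
print(1.22 * dr[-1])              # approximate Rayleigh resolution, m
```

The nanometer-scale outermost zones explain why the deposition technique must hold layer thickness to nanometer precision.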

  16. Optical aperture synthesis with electronically connected telescopes

    PubMed Central

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D.

    2015-01-01

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths. PMID:25880705

  17. Synthetic aperture elastography: a GPU based approach

    NASA Astrophysics Data System (ADS)

    Verma, Prashant; Doyley, Marvin M.

    2014-03-01

    Synthetic aperture (SA) ultrasound imaging produces highly accurate axial and lateral displacement estimates; however, low frame rates and large data volumes can hamper its clinical use. This paper describes a real-time SA imaging based ultrasound elastography system that we have recently developed to overcome this limitation. In this system, we implemented both beamforming and 2D cross-correlation echo tracking on an Nvidia GTX 480 graphics processing unit (GPU). We used one thread per pixel for beamforming, whereas one block per pixel was used for echo tracking. We compared the quality of elastograms computed with our real-time system relative to those computed using our standard single-threaded elastographic imaging methodology. In all studies, we used conventional measures of image quality such as the elastographic signal-to-noise ratio (SNRe). Specifically, the SNRe of axial and lateral strain elastograms computed with the real-time system were 36 dB and 23 dB, respectively, numerically equal to those computed with our standard approach. We achieved a frame rate of 6 frames per second using our GPU based approach for 16 transmits and a kernel size of 60 × 60 pixels, which is 400 times faster than that achieved using our standard protocol.
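
    The per-pixel 2D cross-correlation tracking described above can be sketched in NumPy (a brute-force, single-threaded stand-in for the authors' CUDA kernels; the patch size, search window, and circular-shift test data are invented for illustration):

```python
import numpy as np

def track_shift(pre, post, max_shift=4):
    """Brute-force 2-D cross-correlation over a small search window.
    Returns the (axial, lateral) shift that re-aligns `post` with `pre`
    (i.e. the negative of the displacement `post` underwent)."""
    pre = pre - pre.mean()
    post = post - post.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Circularly shift the post-compression patch and correlate.
            score = np.sum(pre * np.roll(np.roll(post, dy, axis=0), dx, axis=1))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

# Synthetic check: displace a random speckle patch by (+2, -1) and recover it.
rng = np.random.default_rng(0)
frame = rng.standard_normal((32, 32))
moved = np.roll(np.roll(frame, 2, axis=0), -1, axis=1)
print(track_shift(frame, moved))  # -> (-2, 1): the shift that re-aligns `moved`
```

In the GPU version this search runs as one block per output pixel; the sequential loop here is only for clarity.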

  18. High Altitude Synthetic Aperture Imaging of Titan

    NASA Astrophysics Data System (ADS)

    West, Richard; Stiles, B.; Anderson, Y.; Boehmer, R.; Callahan, P.; Gim, Y.; Hamilton, G.; Johnson, W. T.; Kelleher, K.; Wye, L.; Zebker, H.

    2006-09-01

    The Cassini spacecraft has been conducting observations of Titan since July 2004. To date, six close flybys have collected synthetic aperture radar (SAR) data giving image resolutions down to 300-500 m. About 14 additional close radar imaging passes are planned. To improve radar coverage and increase the synergy with other Cassini imaging instruments such as VIMS and ISS, the radar team has started experimenting with very high altitude SAR imaging where conditions permit. This presentation will examine the performance trade-offs, special processing issues, and science potential of these high altitude SAR observations. These data collections are distinct from the normal Titan SAR images because the range is much larger (around 20,000 km). To acquire enough signal in these circumstances, the radar operates in the lowest bandwidth scatterometer mode while spacecraft pointing control is used to slowly pan the central beam across a small swath. Due to a lower signal-to-noise ratio, these high altitude images are designed to average together 150-200 independent looks to see features that may lie below the noise floor. So far, three high altitude images have been acquired, during Titan flybys T12, T13, and T15. In T12, imaging was attempted from 37,000 km with an effective resolution around 5 km. In T13, the Huygens Probe landing site was imaged from 11,000 km with an effective resolution of 1-2 km. In T15, the Tsegehi area was imaged from 20,000 km with an effective resolution of 2-3 km.
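
    The benefit of averaging 150-200 independent looks can be illustrated with a toy calculation (additive Gaussian noise for simplicity, although real SAR speckle is multiplicative; all numbers invented): the noise standard deviation falls roughly as the square root of the number of looks.

```python
import numpy as np

# Illustrative only: averaging N independent looks reduces the noise
# standard deviation by about sqrt(N), i.e. an SNR gain of 10*log10(N) dB.
rng = np.random.default_rng(1)
signal = 0.2                                   # weak feature, below 1-sigma noise
looks = signal + rng.standard_normal((175, 64, 64))   # 175 looks of a 64x64 patch
single_snr = signal / looks[0].std()           # single-look SNR
multi = looks.mean(axis=0)                     # 175-look average
multi_snr = signal / multi.std()               # std of the residual noise shrinks
print(f"SNR gain: {multi_snr / single_snr:.1f}x (sqrt(175) ~ {175**0.5:.1f})")
```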

  19. Motion measurement for synthetic aperture radar

    SciTech Connect

    Doerry, Armin W.

    2015-01-01

    Synthetic Aperture Radar (SAR) measures radar soundings from a set of locations typically along the flight path of a radar platform vehicle. Optimal focusing requires precise knowledge of the sounding source locations in 3-D space with respect to the target scene. Even data-driven focusing techniques (i.e. autofocus) require some degree of initial fidelity in the measurements of the motion of the radar. These requirements may be quite stringent, especially for fine resolutions, long ranges, and low velocities. The principal instrument for measuring motion is typically an Inertial Measurement Unit (IMU), but these instruments have inherently limited precision and accuracy. The question is "How good does an IMU need to be for a SAR across its performance space?" This report analytically relates IMU specifications to parametric requirements for SAR. Acknowledgements: The preparation of this report is the result of an unfunded research and development activity. Although this report is an independent effort, it draws heavily from limited-release documentation generated under a CRADA with General Atomics Aeronautical Systems, Inc. (GA-ASI), and under the Joint DoD/DOE Munitions Program Memorandum of Understanding. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract AC04-94AL85000.
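
    The kind of relation the report derives can be illustrated with back-of-the-envelope arithmetic (all numbers below are assumptions for illustration, not values from the report): an uncompensated accelerometer bias b grows into a position error of b*T^2/2 over an aperture time T, which can be checked against a lambda/8 motion-knowledge rule of thumb.

```python
import math

# Illustrative IMU-to-SAR budget (assumed values throughout).
c = 3.0e8
f0 = 16.7e9                     # assumed Ku-band centre frequency
wavelength = c / f0             # ~18 mm
budget = wavelength / 8         # lambda/8 line-of-sight position-knowledge rule

bias_g = 50e-6                  # assumed 50 micro-g accelerometer bias
bias = bias_g * 9.81            # m/s^2
T = 2.0                         # assumed synthetic-aperture time, seconds
drift = 0.5 * bias * T ** 2     # position error from the uncompensated bias
print(f"drift {drift * 1e3:.2f} mm vs budget {budget * 1e3:.2f} mm")
```

For these assumed numbers the bias-induced drift (about 1 mm) stays inside the budget; longer apertures or coarser IMUs quickly exhaust it, which is the trade the report quantifies.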

  20. Optical aperture synthesis with electronically connected telescopes.

    PubMed

    Dravins, Dainis; Lagadec, Tiphaine; Nuñez, Paul D

    2015-01-01

    Highest resolution imaging in astronomy is achieved by interferometry, connecting telescopes over increasingly longer distances and at successively shorter wavelengths. Here, we present the first diffraction-limited images in visual light, produced by an array of independent optical telescopes, connected electronically only, with no optical links between them. With an array of small telescopes, second-order optical coherence of the sources is measured through intensity interferometry over 180 baselines between pairs of telescopes, and two-dimensional images reconstructed. The technique aims at diffraction-limited optical aperture synthesis over kilometre-long baselines to reach resolutions showing details on stellar surfaces and perhaps even the silhouettes of transiting exoplanets. Intensity interferometry circumvents problems of atmospheric turbulence that constrain ordinary interferometry. Since the electronic signal can be copied, many baselines can be built up between dispersed telescopes, and over long distances. Using arrays of air Cherenkov telescopes, this should enable the optical equivalent of interferometric arrays currently operating at radio wavelengths. PMID:25880705

  1. Motion compensation on synthetic aperture sonar images

    NASA Astrophysics Data System (ADS)

    Heremans, R.; Acheroy, M.; Dupont, Y.

    2006-09-01

    High resolution sonars are required to detect and classify mines on the sea-bed. Synthetic aperture sonar increases the sonar cross-range resolution by several orders of magnitude while maintaining or increasing the area search rate. The resolution is, however, strongly dependent on the precision with which the motion errors of the platform can be estimated. The term micro-navigation is used to describe this very special requirement for sub-wavelength relative positioning of the platform. Algorithms were therefore designed to estimate those motion errors and to correct for them during the (ω, k)-reconstruction phase. To validate the quality of the motion estimation algorithms, a single-transmitter/multiple-receiver simulator was built, allowing the generation of multiple point targets with or without surge, sway, and/or yaw motion errors. The surge motion estimation is shown on real data taken during a sea trial in November 2003 with the low frequency (12 kHz) side scan sonar (LFSS) moving on a rail positioned on the sea-bed near Marciana Marina on Elba Island, Italy.

  2. Multistatic synthetic aperture radar image formation.

    PubMed

    Krishnan, V; Swoboda, J; Yarman, C E; Yazici, B

    2010-05-01

    In this paper, we consider a multistatic synthetic aperture radar (SAR) imaging scenario where a swarm of airborne antennas, some of which are transmitting, receiving or both, traverse arbitrary flight trajectories and transmit arbitrary waveforms without any form of multiplexing. The signal at each receiving antenna may be corrupted by interference from the signals scattered due to multiple transmitters and by additive thermal noise at the receiver. In this scenario, standard bistatic SAR image reconstruction algorithms produce artifacts in the reconstructed images due to these interferences. We use microlocal analysis in a statistical setting to develop a filtered-backprojection (FBP) type analytic image formation method that suppresses artifacts due to interference while preserving the location and orientation of edges of the scene in the reconstructed image. Our FBP-type algorithm exploits the second-order statistics of the target and noise to suppress the artifacts due to interference in a mean-square sense. We present numerical simulations that compare the performance of our multistatic SAR image formation algorithm with the FBP-type bistatic SAR image reconstruction algorithm. While we mainly focus on radar applications, our image formation method is also applicable to other problems arising in fields such as acoustic, geophysical and medical imaging.
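
    The geometric core that an FBP-type method refines is plain delay-and-sum backprojection. The sketch below (geometry, pulse shape, and grid all invented; no statistical filtering) focuses a single point scatterer from a moving bistatic transmitter/receiver pair:

```python
import numpy as np

# Toy bistatic backprojection (delay-and-sum), in normalized units.
c = 1.0
target = np.array([0.0, 0.0])
tx_path = [np.array([x, 5.0]) for x in np.linspace(-4, 4, 41)]
rx_path = [np.array([x + 0.5, 5.0]) for x in np.linspace(-4, 4, 41)]

t = np.linspace(0, 30, 3000)
def pulse(tau):                              # narrow Gaussian echo at delay tau
    return np.exp(-((t - tau) ** 2) / 0.01)

# Simulated echoes: one delayed pulse per transmitter/receiver pair.
echoes = [pulse((np.linalg.norm(tx - target) + np.linalg.norm(rx - target)) / c)
          for tx, rx in zip(tx_path, rx_path)]

# Backproject onto a small grid: sum each echo at its matched bistatic delay.
xs = np.linspace(-1, 1, 21)
img = np.zeros((21, 21))
for i, y in enumerate(xs):
    for j, x in enumerate(xs):
        p = np.array([x, y])
        for tx, rx, e in zip(tx_path, rx_path, echoes):
            tau = (np.linalg.norm(tx - p) + np.linalg.norm(rx - p)) / c
            img[i, j] += np.interp(tau, t, e)

peak = np.unravel_index(np.argmax(img), img.shape)
print(xs[peak[1]], xs[peak[0]])              # focus lands on the scatterer
```

With interfering transmitters, this plain sum acquires the cross-term artifacts the paper's statistically weighted FBP filter is designed to suppress.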

  3. High numerical aperture multilayer Laue lenses.

    PubMed

    Morgan, Andrew J; Prasciolu, Mauro; Andrejczuk, Andrzej; Krzywinski, Jacek; Meents, Alke; Pennicard, David; Graafsma, Heinz; Barty, Anton; Bean, Richard J; Barthelmess, Miriam; Oberthuer, Dominik; Yefanov, Oleksandr; Aquila, Andrew; Chapman, Henry N; Bajt, Saša

    2015-01-01

    The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution.

  4. High numerical aperture multilayer Laue lenses

    PubMed Central

    Morgan, Andrew J.; Prasciolu, Mauro; Andrejczuk, Andrzej; Krzywinski, Jacek; Meents, Alke; Pennicard, David; Graafsma, Heinz; Barty, Anton; Bean, Richard J.; Barthelmess, Miriam; Oberthuer, Dominik; Yefanov, Oleksandr; Aquila, Andrew; Chapman, Henry N.; Bajt, Saša

    2015-01-01

    The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution. PMID:26030003

  5. High numerical aperture multilayer Laue lenses.

    PubMed

    Morgan, Andrew J; Prasciolu, Mauro; Andrejczuk, Andrzej; Krzywinski, Jacek; Meents, Alke; Pennicard, David; Graafsma, Heinz; Barty, Anton; Bean, Richard J; Barthelmess, Miriam; Oberthuer, Dominik; Yefanov, Oleksandr; Aquila, Andrew; Chapman, Henry N; Bajt, Saša

    2015-01-01

    The ever-increasing brightness of synchrotron radiation sources demands improved X-ray optics to utilise their capability for imaging and probing biological cells, nanodevices, and functional matter on the nanometer scale with chemical sensitivity. Here we demonstrate focusing a hard X-ray beam to an 8 nm focus using a volume zone plate (also referred to as a wedged multilayer Laue lens). This lens was constructed using a new deposition technique that enabled the independent control of the angle and thickness of diffracting layers to microradian and nanometer precision, respectively. This ensured that the Bragg condition is satisfied at each point along the lens, leading to a high numerical aperture that is limited only by its extent. We developed a phase-shifting interferometric method based on ptychography to characterise the lens focus. The precision of the fabrication and characterisation demonstrated here provides the path to efficient X-ray optics for imaging at 1 nm resolution. PMID:26030003

  6. Septal aperture aetiology: still more questions than answers.

    PubMed

    Myszka, A

    2015-01-01

    Many theories have been suggested to explain the aetiology of the septal aperture. The influence of genes, the size and shape of the ulnar processes, joint laxity, bone robusticity, osteoarthritis, and osteoporosis has been discussed; however, the problem has not yet been solved. The aim of the study was to examine the correlations between musculoskeletal stress markers, humeral robusticity, and septal aperture. Additionally, the frequency of septal aperture according to sex, age, and skeletal side was analysed. The skeletal material came from a medieval cemetery in Cedynia, Poland. Skeletons of 201 adults (102 males, 99 females) were examined and septal aperture was scored. Six muscle attachment sites of the upper limb bones were analysed. Humeral robusticity was calculated using the humeral robusticity index. The frequency of septal aperture in the population from Cedynia is 7.5%. There are no differences in septal aperture prevalence between males and females, skeletal sides, or age groups. In the analysed material, males with less developed muscle markers on the right upper limb bones showed a higher predicted rate of septal aperture (R = -0.34). On the left bones and among females, the converse correlation was also found, but it is not statistically significant. The correlation between septal aperture and humeral robusticity is converse, yet small and insignificant. These results support the theory of joint laxity and suggest that stronger bones (heavier muscles, more robust bones) increase joint tightness and therefore protect the humeral lamina from septal aperture formation. However, this theory needs further detailed analysis.

  7. Overlapped Fourier coding for optical aberration removal

    PubMed Central

    Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei

    2014-01-01

    We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982
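
    The acquisition step of OFC can be sketched as a forward model (the array sizes, toy sample field, and aperture positions are invented; the phase-retrieval fusion and simulated-annealing aberration search are not shown): a small mask is panned across the Fourier plane, and each position yields one low-resolution intensity image.

```python
import numpy as np

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
# Toy complex sample field: a tilted plane wave confined to a disc.
field = np.exp(1j * 0.1 * xx) * (np.hypot(xx, yy) < 20)
pupil = np.fft.fftshift(np.fft.fft2(field))       # pupil (Fourier) plane field

def snapshot(cx, cy, radius=8):
    """Intensity image recorded with the aperture centred at (cx, cy)."""
    mask = np.hypot(xx - cx, yy - cy) <= radius    # panned circular aperture
    return np.abs(np.fft.ifft2(np.fft.ifftshift(pupil * mask))) ** 2

# Overlapping aperture positions on a coarse grid; the overlap is the
# redundancy the OFC algorithm later exploits for phase retrieval.
images = [snapshot(cx, cy) for cy in (-6, 0, 6) for cx in (-6, 0, 6)]
print(len(images), images[0].shape)                # 9 snapshots, each 64x64
```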

  8. Electromagnetic Formation Flight (EMFF) for Sparse Aperture Arrays

    NASA Technical Reports Server (NTRS)

    Kwon, Daniel W.; Miller, David W.; Sedwick, Raymond J.

    2004-01-01

    Traditional methods of actuating spacecraft in sparse aperture arrays use propellant as a reaction mass. For formation flying systems, propellant becomes a critical consumable which can be quickly exhausted while maintaining relative orientation. Additional problems posed by propellant include optical contamination, plume impingement, thermal emission, and vibration excitation. For these missions, where control of the relative degrees of freedom is important, we consider using a system of electromagnets, in concert with reaction wheels, to replace the consumables. Electromagnetic Formation Flight sparse apertures, powered by solar energy, are designed differently from traditional propulsion systems, which are based on delta-V. This paper investigates the design of sparse apertures both inside and outside the Earth's gravity field.
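
    As a rough sizing sketch of the electromagnet option (far-field coaxial magnetic-dipole approximation; the coil parameters are invented, not from the paper):

```python
import math

mu0 = 4 * math.pi * 1e-7                            # vacuum permeability

def dipole_moment(turns, current, radius):
    """Magnetic moment m = N * I * A of a circular coil."""
    return turns * current * math.pi * radius ** 2

def coaxial_force(m1, m2, d):
    """Far-field attractive force between coaxial dipoles at separation d."""
    return 3 * mu0 * m1 * m2 / (2 * math.pi * d ** 4)

# Invented coil: 400 turns, 100 A, 1 m radius -> m ~ 1.26e5 A m^2.
m = dipole_moment(turns=400, current=100.0, radius=1.0)
print(coaxial_force(m, m, d=10.0))                  # newtons at 10 m separation
```

The steep 1/d^4 fall-off is why EMFF trades are so sensitive to array spacing.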

  9. Nevada Administrative Code for Special Education Programs.

    ERIC Educational Resources Information Center

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and talented," "mental…

  10. Optimality Of Variable-Length Codes

    NASA Technical Reports Server (NTRS)

    Yeh, Pen-Shu; Miller, Warner H.; Rice, Robert F.

    1994-01-01

    Report presents analysis of performances of conceptual Rice universal noiseless coders designed to provide efficient compression of data over wide range of source-data entropies. Includes predictive preprocessor that maps source data into sequence of nonnegative integers and variable-length-coding processor, which adapts to varying entropy of source data by selecting whichever one of number of optional codes yields shortest codeword.
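
    The adaptive code-selection idea can be sketched with Golomb-Rice codes (a simplified stand-in for the Rice coder's option set; block headers and the low-entropy/escape options of the real algorithm are omitted):

```python
# For each block of nonnegative integers (the preprocessor's output),
# pick the Rice parameter k whose total encoded length is shortest.
def rice_codeword(v, k):
    """Unary quotient, a '0' terminator, then k binary remainder bits."""
    q, r = v >> k, v & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def best_k(block, k_max=8):
    """Select the code option whose encoded block is shortest."""
    cost = lambda k: sum((v >> k) + 1 + k for v in block)
    return min(range(k_max + 1), key=cost)

def encode(block):
    k = best_k(block)
    return k, "".join(rice_codeword(v, k) for v in block)

print(encode([0, 1, 0, 2]))       # low entropy  -> k=0, 7 bits
print(encode([30, 25, 28, 31]))   # higher entropy -> k=4, 24 bits
```

A real Rice coder also transmits the chosen k per block so the decoder can follow the adaptation.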

  11. High-contrast imaging with an arbitrary aperture: Active compensation of aperture discontinuities

    SciTech Connect

    Pueyo, Laurent; Norman, Colin

    2013-06-01

    We present a new method to achieve high-contrast images using segmented and/or on-axis telescopes. Our approach relies on using two sequential deformable mirrors (DMs) to compensate for the large amplitude excursions in the telescope aperture due to secondary support structures and/or segment gaps. In this configuration the parameter landscape of DM surfaces that yield high-contrast point-spread functions is not linear, and nonlinear methods are needed to find the true minimum in the optimization topology. We solve the highly nonlinear Monge-Ampere equation that is the fundamental equation describing the physics of phase-induced amplitude modulation. We determine the optimum configuration for our two sequential DM system and show that high-throughput and high-contrast solutions can be achieved using realistic surface deformations that are accessible using existing technologies. We name this process Active Compensation of Aperture Discontinuities (ACAD). We show that for geometries similar to the James Webb Space Telescope, ACAD can attain at least 10^-7 in contrast, and an order of magnitude better for both the future extremely large telescopes and on-axis architectures reminiscent of the Hubble Space Telescope. We show that the converging nonlinear mappings resulting from our DM shapes actually damp near-field diffraction artifacts in the vicinity of the discontinuities. Thus, ACAD actually lowers the chromatic ringing due to diffraction by segment gaps and struts while not amplifying the diffraction at the aperture edges beyond the Fresnel regime. This outer Fresnel ringing can be mitigated by properly designing the optical system. Consequently, ACAD is a true broadband solution to the problem of high-contrast imaging with segmented and/or on-axis apertures. We finally show that once the nonlinear solution is found, fine tuning with the linear methods used in wavefront control can be applied to further improve contrast by another order of magnitude. Generally speaking

  12. Improvements to SOIL: An Eulerian hydrodynamics code

    SciTech Connect

    Davis, C.G.

    1988-04-01

    Possible improvements to SOIL, an Eulerian hydrodynamics code that can do coupled radiation diffusion and strength of materials, are presented in this report. Our research is based on the inspection of other Eulerian codes and theoretical reports on hydrodynamics. The present study suggests that several improvements are in order, such as second-order advection, adaptive meshes, and speedup of the code by vectorization and/or multitasking. 29 refs., 2 figs.
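
    The gain from second-order advection can be illustrated on 1-D linear advection (a toy comparison, not SOIL's scheme): first-order upwind smears a profile, while a second-order Lax-Wendroff update tracks it much more closely.

```python
import numpy as np

n, c = 200, 0.5                                  # cells, Courant number (< 1)
u0 = np.exp(-((np.arange(n) - 50.0) / 8.0) ** 2) # smooth Gaussian profile

def upwind(u):                                   # first order, diffusive
    return u - c * (u - np.roll(u, 1))

def lax_wendroff(u):                             # second order, far less smearing
    return (u - 0.5 * c * (np.roll(u, -1) - np.roll(u, 1))
              + 0.5 * c ** 2 * (np.roll(u, -1) - 2 * u + np.roll(u, 1)))

u1, u2 = u0.copy(), u0.copy()
for _ in range(100):                             # advect 50 cells on a periodic grid
    u1, u2 = upwind(u1), lax_wendroff(u2)

exact = np.roll(u0, int(round(c * 100)))         # exact solution: shifted profile
print(np.abs(u1 - exact).max(), np.abs(u2 - exact).max())  # LW error is smaller
```

Both updates conserve the total of u on the periodic grid; the difference is purely in how faithfully the profile shape survives.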

  13. Optical Transmission Properties of Dielectric Aperture Arrays

    NASA Astrophysics Data System (ADS)

    Yang, Tao

    Optical detection devices such as optical biosensors and optical spectrometers are widely used in many applications for measurement, inspection, and analysis. Due to the large dimensions of prisms and gratings, traditional optical devices normally occupy a large space and involve complicated components. Since cheaper and smaller optical devices are always in demand, miniaturization has continued for years. Thanks to recent fabrication advances, nanophotonic devices such as semiconductor laser chips have been growing in number and diversity. However, optical biosensor chips and optical spectrometer chips are seldom reported in the literature. To improve system integration, the study of ultra-compact, low-cost, high-performance, and easy-alignment optical biosensors and optical spectrometers is imperative. This thesis is an endeavor in these two subjects and presents our research on the optical transmission properties of dielectric aperture arrays and the development of new optical biosensors and optical spectrometers. The first half of the thesis demonstrates that the optical phase shift associated with the surface plasmon (SP) assisted extraordinary optical transmission (EOT) in nano-hole arrays fabricated in a metal film has a strong dependence on the refractive index of the material in close proximity to the holes. A novel refractive index sensor based on detecting the EOT phase shift is proposed by building a model. This device readily provides a 2-D biosensor array platform for non-labeled real-time detection of a variety of organic and biological molecules in a sensor chip format, which leads to a high packing density, minimal analyte volumes, and a large number of parallel channels while facilitating high resolution imaging and supporting a large space-bandwidth product (SBP). Simulation (FDTD Solutions, Lumerical Solutions Inc.) results indicate an achievable sensitivity limit of 4.37x10-9 refractive index

  14. Triangulation using synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Wu, Sherman S. C.; Howington-Kraus, Annie E.

    1991-01-01

    For the extraction of topographic information about Venus from stereoradar images obtained from the Magellan Mission, a Synthetic Aperture Radar (SAR) compilation system was developed on analytical stereoplotters. The system software was extensively tested using stereoradar images from various spacecraft and airborne radar systems, including Seasat, SIR-B, ERIM XCL, and STAR-1. Stereomodeling from radar images was proven feasible, and development is on a correct approach. During testing, the software was enhanced and modified to obtain more flexibility and better precision. Triangulation software for establishing control points using SAR images was also developed through a joint effort with the Defense Mapping Agency. The SAR triangulation system comprises four main programs: TRIDATA, MODDATA, TRISAR, and SHEAR. The first two programs sort and update the data; the third, the main program, performs the iterative statistical adjustment; and the fourth analyzes the results. Flight data and data from the Global Positioning System and inertial system (navigation information) are also input. The SAR triangulation system was tested with six strips of STAR-1 radar images on a VAX-750 computer. Each strip contains images of 10 minutes of flight time (equivalent to a ground distance of 73.5 km); the images cover a ground width of 22.5 km. All images were collected from the same side. With an input of 44 primary control points, 441 ground control points were produced. The adjustment process converged after eight iterations. With a 6-m/pixel resolution of the radar images, the triangulation adjustment has an average standard elevation error of 81 m. Development of Magellan radargrammetry will be continued to convert both the SAR compilation and triangulation systems into digital form.

  15. The LASS (Larger Aperture Superconducting Solenoid) spectrometer

    SciTech Connect

    Aston, D.; Awaji, N.; Barnett, B.; Bienz, T.; Bierce, R.; Bird, F.; Bird, L.; Blockus, D.; Carnegie, R.K.; Chien, C.Y.

    1986-04-01

    LASS is the acronym for the Large Aperture Superconducting Solenoid spectrometer which is located in an rf-separated hadron beam at the Stanford Linear Accelerator Center. This spectrometer was constructed in order to perform high statistics studies of multiparticle final states produced in hadron reactions. Such reactions are frequently characterized by events having complicated topologies and/or relatively high particle multiplicity. Their detailed study requires a spectrometer which can provide good resolution in momentum and position over almost the entire solid angle subtended by the production point. In addition, good final state particle identification must be available so that separation of the many kinematically-overlapping final states can be achieved. Precise analyses of the individual reaction channels require high statistics, so that the spectrometer must be capable of high data-taking rates in order that such samples can be acquired in a reasonable running time. Finally, the spectrometer must be complemented by a sophisticated off-line analysis package which efficiently finds tracks, recognizes and fits event topologies and correctly associates the available particle identification information. This, together with complicated programs which perform specific analysis tasks such as partial wave analysis, requires a great deal of software effort allied to a very large computing capacity. This paper describes the construction and performance of the LASS spectrometer, which is an attempt to realize the features just discussed. The configuration of the spectrometer corresponds to the data-taking on K and K interactions in hydrogen at 11 GeV/c which took place in 1981 and 1982. This constitutes a major upgrade of the configuration used to acquire lower statistics data on 11 GeV/c K p interactions during 1977 and 1978, which is also described briefly.

  16. Soviet oceanographic synthetic aperture radar (SAR) research

    SciTech Connect

    Held, D.N.; Gasparovic, R.F.; Mansfield, A.W.; Melville, W.K.; Mollo-Christensen, E.L.; Zebker, H.A.

    1991-01-01

    Radar non-acoustic anti-submarine warfare (NAASW) became the subject of considerable scientific investigation and controversy in the West subsequent to the discovery by the Seasat satellite in 1978 that manifestations of underwater topography, thought to be hidden from the radar, were visible in synthetic aperture radar (SAR) images of the ocean. In addition, the Seasat radar produced images of ship wakes where the observed angle between the wake arms was much smaller than expected from classical Kelvin wake theory. These observations cast doubt on the radar oceanography community's ability to adequately explain these phenomena, and by extension on the ability of existing hydrodynamic and radar scattering models to accurately predict the observability of submarine-induced signatures. If one is of the opinion that radar NAASW is indeed a potentially significant tool in detecting submerged operational submarines, then the Soviet capability, as evidenced throughout this report, will be somewhat daunting. It will be shown that the Soviets have extremely fine capabilities in both theoretical and experimental hydrodynamics, that Soviet researchers have been conducting at-sea radar remote sensing experiments on a scale comparable to those of the United States for several years longer than we have, and that they have both an airborne and a spaceborne SAR capability. The only discipline in which the Soviet Union appears to be lacking is digital radar signal processing. If one is of the opinion that radar NAASW can have at most a minimal impact on the detection of submerged submarines, then the Soviet effort is of little consequence and poses no threat. 280 refs., 31 figs., 12 tabs.

  17. Georeferencing on Synthetic Aperture Radar Imagery

    NASA Astrophysics Data System (ADS)

    Esmaeilzade, M.; Amini, J.; Zakeri, S.

    2015-12-01

    Due to the SAR (synthetic aperture radar) imaging geometry, SAR images contain geometric distortions that corrupt the image information, so the images must be geometrically calibrated. As radar systems are side looking, geometric distortions such as shadow, foreshortening, and layover occur. To compensate for these geometric distortions, information about the sensor position, the imaging geometry, and the target altitude above the ellipsoid must be available. In this paper, a method for the geometric calibration of SAR images is proposed. The method uses the Range-Doppler equations. For the image georeferencing, the DEM (digital elevation model) of SRTM with 30 m pixel size is used, and exact ephemeris data of the sensor are also required. In the proposed algorithm, the digital elevation model is first transformed to the range and azimuth directions. By applying this process, errors caused by topography, such as foreshortening and layover, are removed in the transferred DEM. Then, the positions of the corners of the original image are found based on the transferred DEM. Next, the original image is registered to the transferred DEM by an 8-parameter projective transformation. The output is the georeferenced image with its geometric distortions removed. The advantage of the method described in this article is that it requires neither control points nor the attitude and rotational parameters of the sensor. Since the ground range resolution of the images used is about 30 m, the geocoded images produced using this method have an accuracy of about 20 m (subpixel) in planimetry and about 30 m in altimetry.
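
    The Range-Doppler equations at the heart of such a method can be sketched as a two-equation Newton solve for a ground point on flat terrain (the sensor state, measurements, and flat-earth simplification are all invented for illustration):

```python
import numpy as np

# Solve  |p - S| = R  (range) and  V.(p - S)/|p - S| = v_r  (Doppler, as a
# radial velocity) for the ground point p = (x, y, 0) by Newton iteration.
S = np.array([0.0, 0.0, 700e3])           # assumed sensor position (700 km alt)
V = np.array([7500.0, 0.0, 0.0])          # assumed sensor velocity, m/s
R_obs, vr_obs = 850e3, 120.0              # assumed slant range and radial velocity

p = np.array([10e3, 400e3])               # initial guess (x, y) on the ground
for _ in range(20):
    d = np.array([p[0], p[1], 0.0]) - S
    r = np.linalg.norm(d)
    f = np.array([r - R_obs, V @ d / r - vr_obs])
    # Jacobian of (range, radial velocity) with respect to (x, y).
    J = np.array([[d[0] / r, d[1] / r],
                  [V[0] / r - (V @ d) * d[0] / r ** 3,
                   V[1] / r - (V @ d) * d[1] / r ** 3]])
    p = p - np.linalg.solve(J, f)

d = np.array([p[0], p[1], 0.0]) - S
r = np.linalg.norm(d)
print(r, V @ d / r)                       # residuals match R_obs and vr_obs
```

Replacing the flat terrain with a DEM height per pixel turns this into the per-point geolocation used in Range-Doppler georeferencing.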

  18. Effect of bandwidth and numerical aperture in optical scatterometry

    NASA Astrophysics Data System (ADS)

    Germer, Thomas A.; Patrick, Heather J.

    2010-03-01

    We consider the effects of finite spectral bandwidth and numerical aperture in scatterometry measurements and discuss efficient integration methods based upon Gaussian quadrature in one dimension (for spectral bandwidth averaging) and two dimensions inside a circle (for numerical aperture averaging). Provided the wavelength is not near a Wood's anomaly of the grating, we find that the resulting methods converge very quickly to a level suitable for most measurement applications. In the vicinity of a Wood's anomaly, however, the methods behave rather poorly. We also describe a method that can be used to extract the effective spectral bandwidth and numerical aperture of a scatterometry tool. We find that accounting for spectral bandwidth and numerical aperture is necessary to obtain satisfactory results in scatterometry.
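
    The one-dimensional Gauss-Legendre average over the spectral band can be sketched as follows (the toy spectrum and band parameters are invented; the real integrand is the tool's simulated grating response):

```python
import numpy as np

def bandwidth_average(R, lam0, full_width, n_nodes=5):
    """Average R(lambda) over [lam0 - w/2, lam0 + w/2] by Gauss-Legendre."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)  # nodes/weights on [-1, 1]
    lam = lam0 + 0.5 * full_width * x                # map nodes into the band
    return np.sum(w * R(lam)) / 2.0                  # weights sum to 2

# Toy smooth spectrum (wavelengths in nm): a handful of nodes suffices,
# which is the efficiency argument away from Wood's anomalies.
R = lambda lam: 0.5 + 0.3 * np.sin(2 * np.pi * lam / 50.0)
print(bandwidth_average(R, 451.0, 5.0))              # matches the analytic average
```

Near a Wood's anomaly the integrand develops a kink, and this rapid polynomial convergence breaks down, which is the failure mode the paper notes.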

  19. Incoherent signal source resolution based on coherent aperture synthesis

    NASA Astrophysics Data System (ADS)

    Zverev, V. A.

    2016-05-01

    A technique is proposed for resolving two incoherent signal sources of the same frequency and significantly different intensities with similar angular coordinates. The technique is based on aperture synthesis of a receiving array using, first, the signal of the higher-power source and the estimate of its angular coordinate, with subsequent subtraction of that signal's spectrum from the angular spectrum of the received field. This makes it possible to achieve aperture synthesis and estimate the angle of arrival of the weaker signal. Thus, the technique is of interest not only for synthesized apertures, but also for arrays with a filled aperture, since it eliminates the restrictions imposed by the sidelobes of the array response. Our mathematical simulation data demonstrate the efficiency of this technique in the detection and location of weak signals against the background of high-power noise sources, even at close angular positions.
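
    The subtraction idea can be sketched for a filled-aperture uniform line array (element count, bearings, and levels are invented): estimate the strong source's bearing from the beamformed power, remove its rank-one component from the sample covariance, and search again for the weak source.

```python
import numpy as np

rng = np.random.default_rng(2)
N, snapshots = 32, 2000
pos = np.arange(N)                                  # half-wavelength spacing
steer = lambda u: np.exp(1j * np.pi * pos * u)      # u = sin(bearing)

u_strong, u_weak = 0.30, 0.38                       # closely spaced sources
A = np.column_stack([steer(u_strong), steer(u_weak)])
amps = np.array([1.0, 0.05])                        # 26 dB level difference
src = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
noise = rng.standard_normal((N, snapshots)) + 1j * rng.standard_normal((N, snapshots))
x = (A * amps) @ src + 0.01 * noise                 # incoherent sources + noise
Rxx = x @ x.conj().T / snapshots                    # sample covariance

u_grid = np.linspace(-1, 1, 2001)
power = lambda R: np.array([(steer(u).conj() @ R @ steer(u)).real for u in u_grid])

u1 = u_grid[np.argmax(power(Rxx))]                  # bearing of dominant source
a1 = steer(u1) / np.sqrt(N)
sigma1 = (a1.conj() @ Rxx @ a1).real                # its power along a1
R_res = Rxx - sigma1 * np.outer(a1, a1.conj())      # subtract rank-one component
u2 = u_grid[np.argmax(power(R_res))]                # weak source now dominates
print(u1, u2)
```

Before the subtraction, the weak source sits well below the strong source's sidelobes; after it, the residual spectrum peaks at the weak source's bearing.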

  20. Ambiguity Of Doppler Centroid In Synthetic-Aperture Radar

    NASA Technical Reports Server (NTRS)

    Chang, Chi-Yung; Curlander, John C.

    1991-01-01

    Paper discusses performances of two algorithms for resolution of ambiguity in estimated Doppler centroid frequency of echoes in synthetic-aperture radar. One based on range-cross-correlation technique, other based on multiple-pulse-repetition-frequency technique.