Sample records for adaptive coded aperture

  1. Adaptive coded aperture imaging in the infrared: towards a practical implementation

    NASA Astrophysics Data System (ADS)

    Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley

    2008-08-01

    An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal-to-noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.

  2. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive, variable-rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bits/pixel is achieved with the technique while maintaining image quality and preserving cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
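
    The moment-preserving quantizer at the heart of BTC can be sketched as follows. This is the classic fixed-rate block quantizer, not the authors' adaptive variable-rate variant; the block size and the mean threshold are illustrative assumptions.

```python
import numpy as np

def btc_encode_block(block):
    """Encode one image block as (mean, std, 1-bit mask) -- classic BTC."""
    m = block.mean()
    s = block.std()
    mask = block > m
    return m, s, mask

def btc_decode_block(m, s, mask):
    """Reconstruct a two-level block that preserves the first two moments."""
    n = mask.size
    q = int(mask.sum())
    if q == 0 or q == n:               # flat block: a single gray level
        return np.full(mask.shape, m)
    lo = m - s * np.sqrt(q / (n - q))  # level for pixels at or below the mean
    hi = m + s * np.sqrt((n - q) / q)  # level for pixels above the mean
    return np.where(mask, hi, lo)
```

    For a 4×4 block the 1-bit mask alone costs 1 bit/pixel, and an 8-bit mean plus 8-bit standard deviation add another bit/pixel, so reaching a rate near the reported 1.6 bits/pixel requires the kind of adaptive allocation the paper describes.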

  3. Mobile, hybrid Compton/coded aperture imaging for detection, identification and localization of gamma-ray sources at stand-off distances

    NASA Astrophysics Data System (ADS)

    Tornga, Shawn R.

    The Stand-off Radiation Detection System (SORDS) program is an Advanced Technology Demonstration (ATD) project through the Department of Homeland Security's Domestic Nuclear Detection Office (DNDO) with the goal of detecting, identifying and localizing weak radiological sources in the presence of large dynamic backgrounds. The Raytheon-SORDS Tri-Modal Imager (TMI) is a mobile, truck-based, hybrid gamma-ray imaging system able to quickly detect, identify and localize radiation sources at standoff distances through improved sensitivity while minimizing the false alarm rate. Reconstruction of gamma-ray sources is performed using a combination of two imaging modalities: coded aperture and Compton scatter imaging. The TMI consists of 35 sodium iodide (NaI) crystals, each 5×5×2 in³, arranged in a random coded aperture mask array (CA), followed by 30 position-sensitive NaI bars, each 24×2.5×3 in³, called the detection array (DA). The CA array acts as both a coded aperture mask and a scattering detector for Compton events. The large-area DA array acts as a collection detector for both Compton-scattered events and coded aperture events. In this thesis, the coded aperture, Compton and hybrid imaging algorithms developed will be described along with their performance. It will be shown that multiple imaging modalities can be fused to improve detection sensitivity over a broader energy range than either modality alone. Since the TMI is a moving system, peripheral data, such as Global Positioning System (GPS) and Inertial Navigation System (INS) measurements, must also be incorporated. A method of adapting static imaging algorithms to a moving platform has been developed. Algorithms were also developed in parallel with detector hardware through the use of extensive simulations performed with the Geometry and Tracking Toolkit v4 (GEANT4), and the simulations have been well validated against measured data. Results of image reconstruction algorithms at various speeds and distances will be presented, as well as localization capability. Utilizing imaging information will be shown to yield signal-to-noise gains over spectroscopic algorithms alone.

  4. Side information in coded aperture compressive spectral imaging

    NASA Astrophysics Data System (ADS)

    Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.

    2017-02-01

    Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on focal plane array (FPA) detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. The use of DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD allows not only the collection of a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also the design of the spatial structure of the coded apertures so as to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. The use of RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image is used not only to improve the reconstruction quality but also to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, and thus the coded aperture structure exploits key features such as scene edges. Real reconstructions from noisy compressed measurements demonstrate the benefit of the designed coded apertures in addition to the improvement in reconstruction quality obtained by the use of side information.
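
    A minimal sketch of an edge-driven design rule of the kind described above, assuming the simplest possible recipe: open the aperture cells where the gradient of a grayscale rendering of the RGB side information is largest. The actual optimization in the paper is more elaborate; the open fraction and gradient estimate here are hypothetical choices.

```python
import numpy as np

def edge_weighted_aperture(rgb, open_fraction=0.5):
    """Bias a binary coded aperture toward scene edges taken from RGB
    side information (illustrative design rule, not the authors')."""
    gray = rgb.mean(axis=2)                                  # RGB -> grayscale
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))  # horizontal gradient
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))  # vertical gradient
    grad = gx + gy
    k = int(open_fraction * grad.size)
    thresh = np.sort(grad.ravel())[-k]        # open the k most edge-like cells
    return (grad >= thresh).astype(np.uint8)
```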

  5. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE PAGES

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...

    2017-09-22

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  6. Coded diffraction system in X-ray crystallography using a boolean phase coded aperture approximation

    NASA Astrophysics Data System (ADS)

    Pinilla, Samuel; Poveda, Juan; Arguello, Henry

    2018-03-01

    Phase retrieval is a problem present in many applications such as optics, astronomical imaging, computational biology and X-ray crystallography. Recent work has shown that the phase can be better recovered when the acquisition architecture includes a coded aperture, which modulates the signal before diffraction, such that the underlying signal is recovered from coded diffraction patterns. Moreover, this type of modulation effect, before the diffraction operation, can be obtained using a phase coded aperture placed just after the sample under study. However, a practical implementation of a phase coded aperture in an X-ray application is not feasible, because it is computationally modeled as a matrix with complex entries, which requires changing the phase of the diffracted beams. In fact, changing the phase implies finding a material that can deviate the direction of an X-ray beam, which can considerably increase the implementation costs. Hence, this paper describes a low-cost coded X-ray diffraction system based on block-unblock coded apertures that enables phase reconstruction. The proposed system approximates the phase coded aperture with a block-unblock coded aperture by using the detour-phase method. The SAXS/WAXS X-ray crystallography software was used to simulate the diffraction patterns of a real crystal structure, the rhombic dodecahedron. Several simulations were carried out to analyze the performance of the block-unblock approximations in recovering the phase, using the simulated diffraction patterns, and the quality of the reconstructions was measured in terms of the peak signal-to-noise ratio (PSNR). Results show that the performance of the block-unblock approximation degrades by at most 12.5% compared with the phase coded aperture. Moreover, the quality of the reconstructions using the boolean approximations is at most 2.5 dB of PSNR below that of the phase coded aperture reconstructions.
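
    The PSNR figure of merit quoted above is computed as follows; this is the standard definition, and the `peak` value is an assumption about the image dynamic range.

```python
import numpy as np

def psnr(ref, rec, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction; peak is the assumed maximum signal value."""
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")
    return 10 * np.log10(peak ** 2 / mse)
```

    A 2.5 dB PSNR gap corresponds to a mean-squared-error ratio of 10^0.25 ≈ 1.78.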

  7. Singer product apertures-A coded aperture system with a fast decoding algorithm

    NASA Astrophysics Data System (ADS)

    Byard, Kevin; Shutler, Paul M. E.

    2017-06-01

    A new type of coded aperture configuration that enables fast decoding of the coded aperture shadowgram data is presented. Based on the products of incidence vectors generated from the Singer difference sets, we call these Singer product apertures. For a range of aperture dimensions, we compare experimentally the performance of three decoding methods: standard decoding, induction decoding and direct vector decoding. In all cases the induction and direct vector methods are several orders of magnitude faster than the standard method, with direct vector decoding being significantly faster than induction decoding. For apertures of the same dimensions the increase in speed offered by direct vector decoding over induction decoding is better for lower throughput apertures.
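
    For reference, the standard correlation decoding that the Singer-product schemes accelerate can be sketched as an FFT-based cyclic cross-correlation of the shadowgram with a decoding array. This is a generic baseline, not the induction or direct vector decoders of the paper.

```python
import numpy as np

def decode_shadowgram(shadow, decoder):
    """Standard decoding: cyclic cross-correlation of the recorded
    shadowgram with a decoding array, computed via 2-D FFTs."""
    S = np.fft.fft2(shadow)
    D = np.fft.fft2(decoder)
    return np.real(np.fft.ifft2(S * np.conj(D)))
```

    For a point source, the shadowgram is a cyclic shift of the aperture pattern, and the decoded image peaks at that shift.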

  8. A Spherical Active Coded Aperture for 4π Gamma-ray Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hellfeld, Daniel; Barton, Paul; Gunter, Donald

    Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar grid detectors on a 14 cm diameter sphere with 192 available detector locations.

  9. Integrated Reconfigurable Aperture, Digital Beam Forming, and Software GPS Receiver for UAV Navigation

    DTIC Science & Technology

    2007-12-11

    Implemented both carrier and code phase tracking loops for performance evaluation of a minimum-power beamforming algorithm and a null-steering algorithm... [figure residue omitted; recoverable captions: Fig. 5, schematic of a K-element antenna array spatial adaptive processor; Fig. 6, schematic of a K-element antenna array space-time adaptive processor]

  10. Spectrally Adaptable Compressive Sensing Imaging System

    DTIC Science & Technology

    2014-05-01

    ...signal recovering. The time-varying coded apertures can be implemented using micro-piezo motors or through the use of Digital Micromirror Devices... feasibility of this testbed by developing a Digital-Micromirror-Device-based Snapshot Spectral Imaging (DMD-SSI) system, which implements CS measurement... Y. Wu, I. O. Mirza, G. R. Arce, and D. W. Prather, "Development of a digital-micromirror-device-based multishot snapshot spectral imaging...

  11. Comparison of PSF maxima and minima of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems

    NASA Astrophysics Data System (ADS)

    Ratnam, Challa; Lakshmana Rao, Vadlamudi; Lachaa Goud, Sivagouni

    2006-10-01

    In the present paper, and a series of papers to follow, the Fourier analytical properties of multiple annuli coded aperture (MACA) and complementary multiple annuli coded aperture (CMACA) systems are investigated. First, the transmission function for MACA and CMACA is derived using Fourier methods and, based on the Fresnel-Kirchhoff diffraction theory, the formulae for the point spread function (PSF) are formulated. The PSF maxima and minima are calculated for both the MACA and CMACA systems. The dependence of these properties on the number of zones is studied and reported in this paper.
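
    A minimal numerical illustration of the transmission-function-to-PSF step: the far-field (Fraunhofer) PSF of a single annular zone via an FFT. The paper's analysis uses the Fresnel-Kirchhoff integral and multiple annuli; the grid size and radii below are arbitrary assumptions.

```python
import numpy as np

def annular_psf(n=256, r_outer=0.25, r_inner=0.15):
    """Far-field PSF of one annular zone: |FFT of the binary
    transmission function|^2, normalized to a unit peak."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2] / n
    r = np.hypot(x, y)
    t = ((r <= r_outer) & (r >= r_inner)).astype(float)   # annular transmission
    psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(t)))) ** 2
    return psf / psf.max()
```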

  12. Utilizing Microelectromechanical Systems (MEMS) Micro-Shutter Designs for Adaptive Coded Aperture Imaging (ACAI) Technologies

    DTIC Science & Technology

    2009-03-01

    [List-of-figures excerpt] Figure 4-1: Applied voltage versus deflection curve for Poly1/Poly2 stacked 300-μm single hot-arm actuator. Figure 4-2: Applied voltage versus deflection curve for Poly1/Poly2 stacked 300-μm double hot-arm actuator. Figure 4-5: Deflection vs. power curves for an individual wedge.

  13. Smoothing-Based Relative Navigation and Coded Aperture Imaging

    NASA Technical Reports Server (NTRS)

    Saenz-Otero, Alvar; Liebe, Carl Christian; Hunter, Roger C.; Baker, Christopher

    2017-01-01

    This project will develop efficient smoothing software for incremental estimation of the relative poses and velocities between multiple small spacecraft in a formation, and a small, long-range depth sensor based on coded aperture imaging that is capable of identifying other spacecraft in the formation. The smoothing algorithm will obtain the maximum a posteriori estimate of the relative poses between the spacecraft by using all available sensor information in the spacecraft formation. This algorithm will be portable between different satellite platforms that possess different sensor suites and computational capabilities, and will be adaptable in the case that one or more satellites in the formation become inoperable. It will obtain a solution that approaches an exact solution, as opposed to one with the linearization approximations typical of filtering algorithms. Thus, the algorithms developed and demonstrated as part of this program will enhance the applicability of small spacecraft to multi-platform operations, such as precisely aligned constellations and fractionated satellite systems.

  14. High-resolution imaging gamma-ray spectroscopy with externally segmented germanium detectors

    NASA Technical Reports Server (NTRS)

    Callas, J. L.; Mahoney, W. A.; Varnell, L. S.; Wheaton, W. A.

    1993-01-01

    Externally segmented germanium detectors promise a breakthrough in gamma-ray imaging capabilities while retaining the superb energy resolution of germanium spectrometers. An angular resolution of 0.2 deg becomes practical by combining position-sensitive germanium detectors having a segment thickness of a few millimeters with a one-dimensional coded aperture located about a meter from the detectors. Correspondingly higher angular resolutions are possible with larger separations between the detectors and the coded aperture. Two-dimensional images can be obtained by rotating the instrument. Although the basic concept is similar to optical or X-ray coded-aperture imaging techniques, several complicating effects arise because of the penetrating nature of gamma rays. The complications include partial transmission through the coded aperture elements, Compton scattering in the germanium detectors, and high background count rates. Extensive electron-photon Monte Carlo modeling of a realistic detector/coded-aperture/collimator system has been performed. Results show that these complicating effects can be characterized and accounted for with no significant loss in instrument sensitivity.
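
    The quoted 0.2 deg figure is consistent with simple geometry. Assuming an illustrative 3.5 mm segment pitch at a 1 m mask-to-detector separation (both numbers are assumptions within the ranges the abstract gives):

```python
import math

segment_mm = 3.5      # assumed segment thickness, "a few millimeters"
separation_m = 1.0    # assumed mask-to-detector distance, "about a meter"

# Angular resolution is roughly the segment pitch subtended at the mask.
resolution_deg = math.degrees(math.atan2(segment_mm / 1000, separation_m))
```

    This comes out near 0.2 degrees; doubling the separation roughly halves the figure, matching the abstract's note that larger separations give higher angular resolution.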

  15. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    NASA Astrophysics Data System (ADS)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.
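
    One of the quantities constrained in such a design loop, the mutual coherence of the compressive sensing matrix, can be computed as follows; the gradient-descent update itself is not shown.

```python
import numpy as np

def mutual_coherence(A):
    """Maximum absolute normalized inner product between distinct
    columns of a sensing matrix (lower is better for recovery)."""
    An = A / np.linalg.norm(A, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(An.T @ An)                              # Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()
```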

  16. Direct simulation Monte Carlo prediction of on-orbit contaminant deposit levels for HALOE

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1994-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flow field and surface conditions and geometric orientations for the satellite in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. A detailed description of the adaptation of this solution method to the study of the satellite's environment is also presented. Results are presented regarding contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface, along with data related to code performance. Using procedures developed in standard contamination analyses, along with many worst-case assumptions, the cumulative upper-limit level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated at about 13,350 Å.

  17. The electromagnetic modeling of thin apertures using the finite-difference time-domain technique

    NASA Technical Reports Server (NTRS)

    Demarest, Kenneth R.

    1987-01-01

    A technique which computes transient electromagnetic responses of narrow apertures in complex conducting scatterers was implemented as an extension of previously developed Finite-Difference Time-Domain (FDTD) computer codes. Although these apertures are narrow with respect to the wavelengths contained within the power spectrum of excitation, this technique does not require significantly more computer resources to attain the increased resolution at the apertures. In the report, an analytical technique which utilizes Babinet's principle to model the apertures is developed, and an FDTD computer code which utilizes this technique is described.

  18. A novel three-dimensional image reconstruction method for near-field coded aperture single photon emission computerized tomography

    PubMed Central

    Mu, Zhiping; Hong, Baoming; Li, Shimin; Liu, Yi-Hwa

    2009-01-01

    Coded aperture imaging for two-dimensional (2D) planar objects has been investigated extensively in the past, whereas little success has been achieved in imaging 3D objects using this technique. In this article, the authors present a novel method of 3D single photon emission computerized tomography (SPECT) reconstruction for near-field coded aperture imaging. Multiangular coded aperture projections are acquired and a stack of 2D images is reconstructed separately from each of the projections. Secondary projections are subsequently generated from the reconstructed image stacks based on the geometry of parallel-hole collimation and the variable magnification of near-field coded aperture imaging. Sinograms of cross-sectional slices of 3D objects are assembled from the secondary projections, and the ordered-subset expectation maximization algorithm is employed to reconstruct the cross-sectional image slices from the sinograms. Experiments were conducted using a customized capillary tube phantom and a micro hot rod phantom. Imaged at approximately 50 cm from the detector, hot rods in the phantom with diameters as small as 2.4 mm could be discerned in the reconstructed SPECT images. These results have demonstrated the feasibility of the authors’ 3D coded aperture image reconstruction algorithm for SPECT, representing an important step in their effort to develop a high sensitivity and high resolution SPECT imaging system. PMID:19544769
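
    The expectation-maximization step used for the sinogram reconstruction can be sketched as a plain ML-EM update; OSEM, as used in the paper, applies the same multiplicative update over ordered subsets of the projections. The system matrix below is a toy assumption, not the paper's collimator model.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain ML-EM for y ≈ A @ x with nonnegative x. Each iteration
    multiplies x by the back-projected ratio of measured to modeled
    projections, normalized by the sensitivity image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # column sums (sensitivity)
    for _ in range(n_iter):
        proj = A @ x                           # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```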

  19. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector.

    PubMed

    Amsden, Jason J; Herr, Philip J; Landry, David M W; Kim, William; Vyas, Raul; Parker, Charles B; Kirley, Matthew P; Keil, Adam D; Gilchrist, Kristin H; Radauscher, Erich J; Hall, Stephen D; Carlson, James B; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T; Russell, Zachary E; Grego, Sonia; Edwards, Steven J; Sperline, Roger P; Denton, M Bonner; Stoner, Brian R; Gehm, Michael E; Glass, Jeffrey T

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  20. Proof of Concept Coded Aperture Miniature Mass Spectrometer Using a Cycloidal Sector Mass Analyzer, a Carbon Nanotube (CNT) Field Emission Electron Ionization Source, and an Array Detector

    NASA Astrophysics Data System (ADS)

    Amsden, Jason J.; Herr, Philip J.; Landry, David M. W.; Kim, William; Vyas, Raul; Parker, Charles B.; Kirley, Matthew P.; Keil, Adam D.; Gilchrist, Kristin H.; Radauscher, Erich J.; Hall, Stephen D.; Carlson, James B.; Baldasaro, Nicholas; Stokes, David; Di Dona, Shane T.; Russell, Zachary E.; Grego, Sonia; Edwards, Steven J.; Sperline, Roger P.; Denton, M. Bonner; Stoner, Brian R.; Gehm, Michael E.; Glass, Jeffrey T.

    2018-02-01

    Despite many potential applications, miniature mass spectrometers have had limited adoption in the field due to the tradeoff between throughput and resolution that limits their performance relative to laboratory instruments. Recently, a solution to this tradeoff has been demonstrated by using spatially coded apertures in magnetic sector mass spectrometers, enabling throughput and signal-to-background improvements of greater than an order of magnitude with no loss of resolution. This paper describes a proof of concept demonstration of a cycloidal coded aperture miniature mass spectrometer (C-CAMMS) demonstrating use of spatially coded apertures in a cycloidal sector mass analyzer for the first time. C-CAMMS also incorporates a miniature carbon nanotube (CNT) field emission electron ionization source and a capacitive transimpedance amplifier (CTIA) ion array detector. Results confirm the cycloidal mass analyzer's compatibility with aperture coding. A >10× increase in throughput was achieved without loss of resolution compared with a single slit instrument. Several areas where additional improvement can be realized are identified.

  1. Class of near-perfect coded apertures

    NASA Technical Reports Server (NTRS)

    Cannon, T. M.; Fenimore, E. E.

    1977-01-01

    Coded aperture imaging of gamma ray sources has long promised an improvement in the sensitivity of various detector systems. The promise has remained largely unfulfilled, however, for either one of two reasons. First, the encoding/decoding method produces artifacts, which even in the absence of quantum noise, restrict the quality of the reconstructed image. This is true of most correlation-type methods. Second, if the decoding procedure is of the deconvolution variety, small terms in the transfer function of the aperture can lead to excessive noise in the reconstructed image. It is proposed to circumvent both of these problems by use of a uniformly redundant array (URA) as the coded aperture in conjunction with a special correlation decoding method.
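
    The flat-sidelobe property that makes URAs "near-perfect" can be demonstrated in one dimension with a quadratic-residue (Legendre) array and a balanced correlation decoder. The 1-D construction below is a standard illustration, not the specific 2-D URA family of the paper.

```python
import numpy as np

def legendre_aperture(p):
    """1-D uniformly redundant array from quadratic residues mod p
    (p prime, p % 4 == 3): cell i is open iff i is a QR mod p."""
    residues = {(i * i) % p for i in range(1, p)}
    return np.array([1 if i in residues else 0 for i in range(p)])

def correlation_decode(aperture):
    """Cyclic correlation of the aperture with its balanced decoding
    array (+1 for open, -1 for closed): a delta peak on a perfectly
    flat background."""
    g = 2 * aperture - 1
    return np.array([aperture @ np.roll(g, -k) for k in range(len(g))])
```

    For p = 11 the decoded response is (p-1)/2 = 5 at zero shift and exactly -1 everywhere else, i.e., a delta function plus a constant offset, which is why the special correlation decoding avoids both the artifacts and the noise amplification described above.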

  2. Design of wavefront coding optical system with annular aperture

    NASA Astrophysics Data System (ADS)

    Chen, Xinhua; Zhou, Jiankang; Shen, Weimin

    2016-10-01

    Wavefront coding can extend the depth of field of a traditional optical system by inserting a phase mask into the pupil plane. In this paper, the point spread function (PSF) of a wavefront coding system with an annular aperture is analyzed. The stationary phase method and the fast Fourier transform (FFT) method are used to compute the diffraction integral. The OTF invariance is analyzed for the annular aperture with a cubic phase mask under different obscuration ratios. With these analysis results, a wavefront coding system using a Maksutov-Cassegrain configuration is designed. It is an F/8.21 catadioptric system with an annular aperture, and its focal length is 821 mm. The strength of the cubic phase mask is optimized with a user-defined operand in Zemax. The Wiener filtering algorithm is used to restore the images, and numerical simulation proves the validity of the design.
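
    The Wiener restoration step mentioned above can be sketched as a frequency-domain filter. Here the noise-to-signal ratio is an assumed constant rather than an estimated spectrum, and the PSF is taken to be centered in its array.

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Wiener deconvolution: W = H* / (|H|^2 + NSR), applied in the
    frequency domain; psf is assumed centered, nsr is an assumed
    constant noise-to-signal power ratio."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```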

  3. Adaptive array antenna for satellite cellular and direct broadcast communications

    NASA Technical Reports Server (NTRS)

    Horton, Charles R.; Abend, Kenneth

    1993-01-01

    Adaptive phased-array antennas provide cost-effective implementation of large, lightweight apertures with high directivity and precise beam-shape control. Adaptive self-calibration allows for relaxation of all mechanical tolerances across the aperture and electrical component tolerances, providing high performance with a low-cost, lightweight array, even in the presence of large physical distortions. Beam-shape is programmable and adaptable to changes in technical and operational requirements. Adaptive digital beam-forming eliminates uplink contention by allowing a single electronically steerable antenna to service a large number of receivers with beams which adaptively focus on one source while eliminating interference from others. A large, adaptively calibrated and fully programmable aperture can also provide precise beam-shape control for power-efficient direct broadcast from space. Advanced adaptive digital beamforming technologies are described for: (1) electronic compensation of aperture distortion, (2) multiple receiver adaptive space-time processing, and (3) downlink beam-shape control. Cost considerations for space-based array applications are also discussed.

  4. Evaluation of the cosmic-ray induced background in coded aperture high energy gamma-ray telescopes

    NASA Technical Reports Server (NTRS)

    Owens, Alan; Barbier, Louis M.; Frye, Glenn M.; Jenkins, Thomas L.

    1991-01-01

    While the application of coded-aperture techniques to high-energy gamma-ray astronomy offers potential arc-second angular resolution, concerns were raised about the level of secondary radiation produced in a thick high-Z mask. A series of Monte Carlo calculations was conducted to evaluate and quantify the cosmic-ray induced neutral particle background produced in a coded-aperture mask. It is shown that this component may be neglected, being at least a factor of 50 lower in intensity than the cosmic diffuse gamma-rays.

  5. A Programmable Liquid Collimator for Both Coded Aperture Adaptive Imaging and Multiplexed Compton Scatter Tomography

    DTIC Science & Technology

    2012-03-01

    ...environments where a source is either weak or shielded. A vehicle of this type could survey large areas after a nuclear attack or a nuclear reactor accident... to prevent its detection by γ-rays. The best application for unmanned vehicles is the detection of radioactive material after a nuclear reactor accident or a nuclear weapon detonation [70]. Whether by a nuclear detonation or a nuclear reactor accident, highly radioactive substances could be dis...

  6. Multi-Aperture Digital Coherent Combining for Free-Space Optical Communication Receivers

    DTIC Science & Technology

    2016-04-21

    Distribution A: Public Release; unlimited distribution. © 2016 Optical Society of America. OCIS codes: (060.1660) Coherent communications; (070.2025) Discrete... Coherent combining algorithm: multi-aperture coherent combining enables using many discrete apertures together to create a large effective aperture...

  7. Adaptive Control Of Woofer-Tweeter Adaptive Optics

    DTIC Science & Technology

    2009-03-01

    ...the actuator geometry, and the matrix F describes the lowpass filter. The columns of T form a set of basis vectors in the space of the master... set equal to the simulated aperture size of 76 cm. The tweeter DM has 39 actuators across the aperture with a spacing of 2 cm, for a total of 1521... actuators over the square aperture.

  8. User's manual for CBS3DS, version 1.0

    NASA Astrophysics Data System (ADS)

    Reddy, C. J.; Deshpande, M. D.

    1995-10-01

    CBS3DS is a computer code written in FORTRAN 77 to compute the backscattering radar cross section of cavity-backed apertures in an infinite ground plane and slots in a thick infinite ground plane. CBS3DS implements the hybrid Finite Element Method (FEM) and Method of Moments (MoM) techniques. The code uses tetrahedral elements, with vector edge basis functions, for the FEM in the volume of the cavity/slot, and triangular elements, with the corresponding basis functions, for the MoM at the apertures. By virtue of the FEM, the code can handle arbitrarily shaped three-dimensional cavities filled with inhomogeneous lossy materials; owing to the MoM, the apertures can be of any arbitrary shape. The User's Manual is written to acquaint the user with the operation of the code. The user is assumed to be familiar with the FORTRAN 77 language and the operating environment of the computer on which the code is intended to run.

  9. Salient features of MACA and CMACA systems and their applications

    NASA Astrophysics Data System (ADS)

    Ratnam, C.; Goud, S. L.; Rao, V. Lakshmana

    2007-09-01

    The results of a Fourier analytical investigation of the performance of Multiple Annuli Coded Aperture (MACA) and Complementary Multiple Annuli Coded Aperture (CMACA) systems are summarised, and probable applications of these systems in astronomy, high-energy radiation imaging, optical filters, and metallurgy are suggested.

  10. On predicting contamination levels of HALOE optics aboard UARS using direct simulation Monte Carlo

    NASA Technical Reports Server (NTRS)

    Woronowicz, Michael S.; Rault, Didier F. G.

    1993-01-01

    A three-dimensional version of the direct simulation Monte Carlo method is adapted to assess the contamination environment surrounding a highly detailed model of the Upper Atmosphere Research Satellite. Emphasis is placed on simulating a realistic, worst-case set of flowfield and surface conditions and geometric orientations in order to estimate an upper limit for the cumulative level of volatile organic molecular deposits at the aperture of the Halogen Occultation Experiment. Problems resolving species outgassing and vent flux rates that varied over many orders of magnitude were handled using species weighting factors. Results relating to contaminant cloud structure, cloud composition, and statistics of simulated molecules impinging on the target surface are presented, along with data related to code performance. Using procedures developed in standard contamination analyses, the cumulative level of volatile organic deposits on HALOE's aperture over the instrument's 35-month nominal data collection period is estimated to be about 2700 Å.

  11. High quality adaptive optics zoom with adaptive lenses

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Santiago, F.; Bonora, S.; Restaino, S.

    2018-02-01

    We present the combined use of a large-aperture adaptive lens with large optical power modulation and a multi-actuator adaptive lens. The Multi-actuator Adaptive Lens (M-AL) can correct up to the 4th radial order of Zernike polynomials, without any obstructions (electrodes or actuators) placed inside its clear aperture. We demonstrate that the use of both lenses together can lead to better image quality and to the correction of aberrations of adaptive optics optical systems.

  12. Hard X-ray imaging from Explorer

    NASA Technical Reports Server (NTRS)

    Grindlay, J. E.; Murray, S. S.

    1981-01-01

    Coded aperture X-ray detectors were applied to obtain large increases in sensitivity as well as angular resolution. A hard X-ray coded aperture detector concept is described which enables very high sensitivity studies of persistent hard X-ray sources and gamma-ray bursts. Coded aperture imaging is employed so that approx. 2 arcmin source locations can be derived within a 3 deg field of view. Gamma-ray bursts were located initially to within approx. 2 deg, and X-ray/hard X-ray spectra and timing, as well as precise locations, were derived for possible burst afterglow emission. It is suggested that hard X-ray imaging should be conducted from an Explorer mission where long exposure times are possible.

  13. The AdaptiSPECT Imaging Aperture

    PubMed Central

    Chaix, Cécile; Moore, Jared W.; Van Holen, Roel; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    In this paper, we present the imaging aperture of an adaptive SPECT imaging system being developed at the Center for Gamma Ray Imaging (AdaptiSPECT). AdaptiSPECT is designed to automatically change its configuration in response to preliminary data, in order to improve image quality for a particular task. In a traditional pinhole SPECT imaging system, the characteristics (magnification, resolution, field of view) are set by the geometry of the system, and any modification can be accomplished only by manually changing the collimator and the distance of the detector to the center of the field of view. Optimization of the imaging system for a specific task on a specific individual is therefore difficult. In an adaptive SPECT imaging system, on the other hand, the configuration can be conveniently changed under computer control. A key component of an adaptive SPECT system is its aperture. In this paper, we present the design, specifications, and fabrication of the adaptive pinhole aperture that will be used for AdaptiSPECT, as well as the controls that enable autonomous adaptation. PMID:27019577
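The geometric relations the abstract refers to (magnification, resolution, and field of view set by the pinhole-to-detector and object-to-pinhole distances) can be written down directly. The sketch below is a generic thin-pinhole approximation with hypothetical distances, not AdaptiSPECT's actual configuration parameters:

```python
def pinhole_geometry(pinhole_to_detector, object_to_pinhole, detector_width):
    """Magnification and field of view of an ideal pinhole collimator.

    All distances and widths are in the same (arbitrary) units; the
    thin-pinhole approximation is assumed.
    """
    m = pinhole_to_detector / object_to_pinhole   # geometric magnification
    fov = detector_width / m                      # object-space field of view
    return m, fov

# hypothetical example: detector 20 cm behind the pinhole, object 10 cm in front
m, fov = pinhole_geometry(20.0, 10.0, detector_width=40.0)
```

An adaptive system changes these distances (and the pinhole diameter) under computer control, trading field of view against magnification for the task at hand.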

  14. Coded aperture solution for improving the performance of traffic enforcement cameras

    NASA Astrophysics Data System (ADS)

    Masoudifar, Mina; Pourreza, Hamid Reza

    2016-10-01

    A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.

  15. Development of a Coded Aperture X-Ray Backscatter Imager for Explosive Device Detection

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.; Rothschild, Richard E.; Leblanc, Philippe; McFee, John Elton

    2009-02-01

    Defence R&D Canada has an active research and development program on detection of explosive devices using nuclear methods. One system under development is a coded aperture-based X-ray backscatter imaging detector designed to provide sufficient speed, contrast and spatial resolution to detect antipersonnel landmines and improvised explosive devices. The successful development of a hand-held imaging detector requires, among other things, a light-weight, ruggedized detector with low power requirements, supplying high spatial resolution. The University of California, San Diego-designed HEXIS detector provides a modern, large area, high-temperature CZT imaging surface, robustly packaged in a light-weight housing with sound mechanical properties. Based on the potential for the HEXIS detector to be incorporated as the detection element of a hand-held imaging detector, the authors initiated a collaborative effort to demonstrate the capability of a coded aperture-based X-ray backscatter imaging detector. This paper will discuss the landmine and IED detection problem and review the coded aperture technique. Results from initial proof-of-principle experiments will then be reported.

  16. Imaging Analysis of the Hard X-Ray Telescope ProtoEXIST2 and New Techniques for High-Resolution Coded-Aperture Telescopes

    NASA Technical Reports Server (NTRS)

    Hong, Jaesub; Allen, Branden; Grindlay, Jonathan; Barthelmy, Scott D.

    2016-01-01

    Wide-field (greater than or approximately equal to 100 square degrees) hard X-ray coded-aperture telescopes with high angular resolution (less than or approximately equal to 2 arcminutes) will enable a wide range of time domain astrophysics. For instance, transient sources such as gamma-ray bursts can be precisely localized without the assistance of secondary focusing X-ray telescopes, enabling rapid followup studies. On the other hand, high angular resolution in coded-aperture imaging introduces a new challenge in handling the systematic uncertainty: the average photon count per pixel is often too small to establish a proper background pattern or to model the systematic uncertainty on a timescale over which the model remains invariant. We introduce two new techniques to improve detection sensitivity, designed for, but not limited to, a high-resolution coded-aperture system: a self-background modeling scheme which utilizes continuous scan or dithering operations, and a Poisson-statistics-based probabilistic approach that evaluates the significance of source detection without background subtraction. We illustrate these new imaging analysis techniques for a high-resolution coded-aperture telescope using data acquired by the wide-field hard X-ray telescope ProtoEXIST2 during a high-altitude balloon flight in fall 2012. We review the imaging sensitivity of ProtoEXIST2 during the flight, and demonstrate the performance of the new techniques using our balloon flight data in comparison with a simulated ideal Poisson background.
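The core of a Poisson-statistics-based significance test is the upper-tail probability of the observed counts under the background model. The sketch below is a generic tail-probability computation, not the authors' full probabilistic framework; the count values are illustrative:

```python
import math

def poisson_sf(n, mu):
    """P(N >= n) for N ~ Poisson(mu): the tail probability used as a
    detection p-value when n counts are seen over expected background mu."""
    if n <= 0:
        return 1.0
    # P(N >= n) = 1 - sum_{k < n} exp(-mu) * mu^k / k!
    cdf = sum(math.exp(-mu) * mu ** k / math.factorial(k) for k in range(n))
    return max(0.0, 1.0 - cdf)

# hypothetical sky pixel: 20 counts where only 5 are expected from background
p_value = poisson_sf(20, 5.0)   # very small: a significant excess
```

Working directly with the Poisson tail avoids subtracting a noisy background estimate, which is exactly the low-counts regime the abstract describes.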

  17. Room-temperature quantum noise limited spectrometry and methods of the same

    DOEpatents

    Stevens, Charles G.; Tringe, Joseph W.; Cunningham, Christopher Thomas

    2014-08-26

    In one embodiment, a heterodyne detection system for detecting light includes a first input aperture adapted for receiving first light from a scene input, a second input aperture adapted for receiving second light from a local oscillator input, a broadband local oscillator adapted for providing the second light to the second input aperture, a dispersive element adapted for dispersing the first light and the second light, and a final condensing lens coupled to an infrared detector. The final condensing lens is adapted for concentrating incident light from a primary condensing lens onto the infrared detector, and the infrared detector is a square-law detector capable of sensing the frequency difference between the first light and the second light. More systems and methods for detecting light are described according to other embodiments.

  18. Room-temperature quantum noise limited spectrometry and methods of the same

    DOEpatents

    Stevens, Charles G; Tringe, Joseph W

    2014-12-02

    In one embodiment, a heterodyne detection system for detecting light includes a first input aperture adapted for receiving a first light from a scene input, a second input aperture adapted for receiving a second light from a local oscillator input, a broadband local oscillator adapted for providing the second light to the second input aperture, a dispersive element adapted for dispersing the first light and the second light, and a final condensing lens coupled to an infrared detector. The final condensing lens is adapted for concentrating incident light from a primary condensing lens onto the detector, and the detector is a square-law detector capable of sensing the frequency difference between the first light and the second light. More systems and methods for detecting light are disclosed according to more embodiments.
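The role of the square-law detector in the two patent records above can be illustrated with a toy numeric sketch. All tone frequencies and the sample rate below are hypothetical, not values from the patents: squaring the sum of a scene field and a local-oscillator field produces a beat note at their difference frequency, which is what the detector senses.

```python
import numpy as np

fs = 1000.0                        # sample rate, Hz (hypothetical)
t = np.arange(0, 1.0, 1 / fs)
f_sig, f_lo = 130.0, 100.0         # hypothetical scene and local-oscillator tones

field = np.cos(2 * np.pi * f_sig * t) + np.cos(2 * np.pi * f_lo * t)
intensity = field ** 2             # square-law detection: output proportional to |field|^2

# the slow beat note survives a low-pass stage; inspect the spectrum below 60 Hz
spec = np.abs(np.fft.rfft(intensity - intensity.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
low = freqs < 60.0
beat = freqs[low][np.argmax(spec[low])]   # -> 30.0 = |f_sig - f_lo|
```

The squared field contains terms at 2*f_sig, 2*f_lo, f_sig + f_lo, and f_sig - f_lo; only the difference term is slow enough to pass a low-pass filter, so the detector effectively measures the frequency difference between the two lights.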

  19. Coded aperture detector: an image sensor with sub 20-nm pixel resolution.

    PubMed

    Miyakawa, Ryan; Mayer, Rafael; Wojdyla, Antoine; Vannier, Nicolas; Lesser, Ian; Aron-Dine, Shifrah; Naulleau, Patrick

    2014-08-11

    We describe the coded aperture detector, a novel image sensor based on uniformly redundant arrays (URAs) with customizable pixel size, resolution, and operating photon energy regime. In this sensor, a coded aperture is scanned laterally at the image plane of an optical system, and the transmitted intensity is measured by a photodiode. The image intensity is then digitally reconstructed using a simple convolution. We present results from a proof-of-principle optical prototype, demonstrating high-fidelity image sensing comparable to a CCD. A 20-nm half-pitch URA fabricated by the Center for X-ray Optics (CXRO) nano-fabrication laboratory is presented that is suitable for high-resolution image sensing at EUV and soft X-ray wavelengths.
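The reconstruction-by-convolution idea behind URA-based sensing can be sketched numerically. The snippet below is an illustrative sketch, not the CXRO implementation: it builds a standard MURA mask for a prime side length, forms the detector pattern for a hypothetical point source (a cyclic shift of the mask), and decodes by periodic cross-correlation with the balanced decoding array; the 11×11 size and source position are arbitrary choices.

```python
import numpy as np

def mura(p):
    """Modified Uniformly Redundant Array (MURA) of prime side length p."""
    qr = np.zeros(p, dtype=int)
    qr[(np.arange(1, p) ** 2) % p] = 1        # quadratic residues mod p
    A = np.zeros((p, p), dtype=int)
    for i in range(1, p):
        for j in range(1, p):
            A[i, j] = int(qr[i] == qr[j])     # open where C_i * C_j = +1
    A[:, 0] = 1                                # first column open ...
    A[0, :] = 0                                # ... first row closed (A[0,0] = 0)
    return A

def decoder(A):
    """Balanced decoding array: +1 where A is open, -1 where closed, G[0,0]=+1."""
    G = 2 * A - 1
    G[0, 0] = 1
    return G

def circ_corr(x, y):
    """Periodic (cyclic) cross-correlation computed via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))))

# a point source casts a cyclically shifted copy of the mask on the detector
A = mura(11)
D = np.roll(A, (3, 7), axis=(0, 1))           # detector image, source at (3, 7)
R = circ_corr(D, decoder(A))                  # digitally reconstructed image
peak = np.unravel_index(np.argmax(R), R.shape)
```

The reconstruction peaks at the true source position, which is the property that lets a scanned coded aperture plus a single photodiode act as an image sensor.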

  20. Solar dynamic power for the Space Station

    NASA Technical Reports Server (NTRS)

    Archer, J. S.; Diamant, E. S.

    1986-01-01

    This paper describes a computer code which provides a significant advance in the systems analysis capabilities of solar dynamic power modules. While the code can be used to advantage in the preliminary analysis of terrestrial solar dynamic modules its real value lies in the adaptions which make it particularly useful for the conceptualization of optimized power modules for space applications. In particular, as illustrated in the paper, the code can be used to establish optimum values of concentrator diameter, concentrator surface roughness, concentrator rim angle and receiver aperture corresponding to the main heat cycle options - Organic Rankine and Brayton - and for certain receiver design options. The code can also be used to establish system sizing margins to account for the loss of reflectivity in orbit or the seasonal variation of insolation. By the simulation of the interactions among the major components of a solar dynamic module and through simplified formulations of the major thermal-optic-thermodynamic interactions the code adds a powerful, efficient and economic analytical tool to the repertory of techniques available for the design of advanced space power systems.

  1. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Single Input Multiple Output Technology.

    PubMed

    Chen, Shuo; Luo, Chenggao; Deng, Bin; Wang, Hongqiang; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-01-19

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporally independent signals with coded apertures. In this paper, we propose a three-dimensional (3D) TCAI architecture based on single input multiple output (SIMO) technology, which can reduce the coding and sampling times sharply. The coded aperture applied in the proposed TCAI architecture loads either a purposive or a random phase modulation factor. In the transmitting process, the purposive phase modulation factor drives the terahertz beam to scan the divided 3D imaging cells. In the receiving process, the random phase modulation factor is adopted to modulate the terahertz wave to be spatiotemporally independent for high resolution. Considering human-scale targets, images of each 3D imaging cell are reconstructed one by one to decompose the global computational complexity, and are then synthesized together to obtain the complete high-resolution image. As for each imaging cell, the multi-resolution imaging method helps to reduce the computational burden of a large-scale reference-signal matrix. The experimental results demonstrate that the proposed architecture can achieve high-resolution imaging with much less time for 3D targets and has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.

  2. Analysis on the optical aberration effect on spectral resolution of coded aperture spectroscopy

    NASA Astrophysics Data System (ADS)

    Hao, Peng; Chi, Mingbo; Wu, Yihui

    2017-10-01

    The coded aperture spectrometer can achieve high throughput and high spectral resolution by replacing the traditional single slit with a two-dimensional slit array manufactured by MEMS technology. However, the sampling accuracy of the coded spectral image is distorted by system aberrations, machining errors, fixing errors, and so on, degrading the spectral resolution. The factors influencing the spectral resolution are the decoding error, the spectral resolution of each column, and the column spectrum offset correction. For a Czerny-Turner spectrometer, the spectral resolution of each column depends mostly on astigmatism; in this coded aperture spectrometer, uncorrected astigmatism does result in degraded performance, so methods must be used to reduce or remove the limiting astigmatism. The curvature of field and the spectral line curvature can also result in spectrum revision errors.

  3. Method of Modeling and Simulation of Shaped External Occulters

    NASA Technical Reports Server (NTRS)

    Lyon, Richard G. (Inventor); Clampin, Mark (Inventor); Petrone, Peter, III (Inventor)

    2016-01-01

    The present invention relates to modeling an external occulter including: providing at least one processor executing program code to implement a simulation system, the program code including: providing an external occulter having a plurality of petals, the occulter being coupled to a telescope; and propagating light from the occulter to a telescope aperture of the telescope by scalar Fresnel propagation, by: obtaining an incident field strength at a predetermined wavelength at an occulter surface; obtaining a field propagation from the occulter to the telescope aperture using a Fresnel integral; modeling a celestial object at differing field angles by shifting a location of a shadow cast by the occulter on the telescope aperture; calculating an intensity of the occulter shadow on the telescope aperture; and applying a telescope aperture mask to a field of the occulter shadow, and propagating the light to a focal plane of the telescope via FFT techniques.

  4. Design and performance of coded aperture optical elements for the CESR-TA x-ray beam size monitor

    NASA Astrophysics Data System (ADS)

    Alexander, J. P.; Chatterjee, A.; Conolly, C.; Edwards, E.; Ehrlichman, M. P.; Flanagan, J. W.; Fontes, E.; Heltsley, B. K.; Lyndaker, A.; Peterson, D. P.; Rider, N. T.; Rubin, D. L.; Seeley, R.; Shanks, J.

    2014-12-01

    We describe the design and performance of optical elements for an x-ray beam size monitor (xBSM), a device measuring e+ and e- beam sizes in the CESR-TA storage ring. The device can measure vertical beam sizes of 10 - 100 μm on a turn-by-turn, bunch-by-bunch basis at e± beam energies of 2 - 5 GeV. X-rays produced by a hard-bend magnet pass through a single- or multiple-slit (coded aperture) optical element onto a detector. The coded aperture slit pattern and the thickness of masking material forming that pattern can both be tuned for optimal resolving power. We describe several such optical elements and show how well predictions of simple models track measured performances.

  5. Monte Carlo simulation of ion-neutral charge exchange collisions and grid erosion in an ion thruster

    NASA Technical Reports Server (NTRS)

    Peng, Xiaohang; Ruyten, Wilhelmus M.; Keefer, Dennis

    1991-01-01

    A combined particle-in-cell (PIC)/Monte Carlo simulation model has been developed in which the PIC method is used to simulate the charge exchange collisions. It is noted that a number of features were reproduced correctly by this code, but that its assumption of two-dimensional axisymmetry for a single set of grid apertures precluded the reproduction of the most characteristic feature of actual test data; namely, the concentrated grid erosion at the geometric center of the hexagonal aperture array. The first results of a three-dimensional code, which takes into account the hexagonal symmetry of the grid, are presented. It is shown that, with this code, the experimentally observed erosion patterns are reproduced correctly, demonstrating explicitly the concentration of sputtering between apertures.

  6. Evaluation of coded aperture radiation detectors using a Bayesian approach

    NASA Astrophysics Data System (ADS)

    Miller, Kyle; Huggins, Peter; Labov, Simon; Nelson, Karl; Dubrawski, Artur

    2016-12-01

    We investigate tradeoffs arising from the use of coded aperture gamma-ray spectrometry to detect and localize sources of harmful radiation in the presence of noisy background. Using an example application scenario of area monitoring and search, we empirically evaluate weakly supervised spectral, spatial, and hybrid spatio-spectral algorithms for scoring individual observations, and two alternative methods of fusing evidence obtained from multiple observations. Results of our experiments confirm the intuition that directional information provided by spectrometers masked with coded apertures enables gains in source localization accuracy, but at the expense of reduced probability of detection. Losses in detection performance can, however, be reclaimed to a substantial extent by using our new spatial and spatio-spectral scoring methods, which rely on realistic assumptions regarding masking and its impact on measured photon distributions.
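One simple way to fuse evidence from multiple observations is to accumulate per-observation log-likelihood ratios under an independence assumption. This is a generic naive-Bayes sketch for illustration, not necessarily either of the paper's two fusion methods; the prior and ratio values are hypothetical:

```python
import math

def fuse_evidence(log_likelihood_ratios, prior=0.5):
    """Combine independent per-observation log-likelihood ratios
    (source present vs. absent) into a posterior probability."""
    log_odds = math.log(prior / (1.0 - prior)) + sum(log_likelihood_ratios)
    return 1.0 / (1.0 + math.exp(-log_odds))

# hypothetical scan: three observations, each favoring "source present" 3:1
posterior = fuse_evidence([math.log(3.0)] * 3)
```

Accumulating in log space keeps the fusion numerically stable and makes each pass of the detector contribute additively to the evidence.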

  7. Signal-to-noise ratio of Singer product apertures

    NASA Astrophysics Data System (ADS)

    Shutler, Paul M. E.; Byard, Kevin

    2017-09-01

    Formulae for the signal-to-noise ratio (SNR) of Singer product apertures are derived, allowing optimal Singer product apertures to be identified, and the CPU time required to decode them is quantified. This allows a systematic comparison to be made of the performance of Singer product apertures against both conventionally wrapped Singer apertures, and also conventional product apertures such as square uniformly redundant arrays. For very large images, equivalently for images at very high resolution, the SNR of Singer product apertures is asymptotically as good as the best conventional apertures, but Singer product apertures decode faster than any conventional aperture by at least a factor of ten for image sizes up to several megapixels. These theoretical predictions are verified using numerical simulations, demonstrating that coded aperture video is for the first time a realistic possibility.

  8. Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope.

    PubMed

    Burns, Stephen A; Tumbar, Remy; Elsner, Ann E; Ferguson, Daniel; Hammer, Daniel X

    2007-05-01

    We describe the design and performance of an adaptive optics retinal imager that is optimized for use during dynamic correction for eye movements. The system incorporates a retinal tracker and stabilizer, a wide-field line scan scanning laser ophthalmoscope (SLO), and a high-resolution microelectromechanical-systems-based adaptive optics SLO. The detection system incorporates selection and positioning of confocal apertures, allowing measurement of images arising from different portions of the double pass retinal point-spread function (psf). System performance was excellent. The adaptive optics increased the brightness and contrast for small confocal apertures by more than 2x and decreased the brightness of images obtained with displaced apertures, confirming the ability of the adaptive optics system to improve the psf. The retinal image was stabilized to within 18 microm 90% of the time. Stabilization was sufficient for cross-correlation techniques to automatically align the images.

  9. Large Field of View, Modular, Stabilized, Adaptive-Optics-Based Scanning Laser Ophthalmoscope

    PubMed Central

    Burns, Stephen A.; Tumbar, Remy; Elsner, Ann E.; Ferguson, Daniel; Hammer, Daniel X.

    2007-01-01

    We describe the design and performance of an adaptive optics retinal imager that is optimized for use during dynamic correction for eye movements. The system incorporates a retinal tracker and stabilizer, a wide-field line scan scanning laser ophthalmoscope (SLO), and a high-resolution MEMS-based adaptive optics SLO. The detection system incorporates selection and positioning of confocal apertures, allowing measurement of images arising from different portions of the double pass retinal point-spread function (psf). System performance was excellent. The adaptive optics increased the brightness and contrast for small confocal apertures by more than 2x, and decreased the brightness of images obtained with displaced apertures, confirming the ability of the adaptive optics system to improve the point-spread function. The retinal image was stabilized to within 18 microns 90% of the time. Stabilization was sufficient for cross-correlation techniques to automatically align the images. PMID:17429477

  10. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2016-01-01

    Abstract. A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice. PMID:26962543
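The reported sensitivity, specificity, and accuracy follow from voxel-level confusion counts in the standard way. The sketch below is a generic computation with illustrative counts, not the study's actual voxel data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts
    (tp/fn: cancerous voxels correctly/incorrectly classified;
     tn/fp: healthy voxels correctly/incorrectly classified)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# illustrative counts only (chosen to mimic ~92% performance)
sens, spec, acc = classification_metrics(tp=92, fp=8, tn=92, fn=8)
```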

  11. Design and implementation of coded aperture coherent scatter spectral imaging of cancerous and healthy breast tissue samples.

    PubMed

    Lakshmanan, Manu N; Greenberg, Joel A; Samei, Ehsan; Kapadia, Anuj J

    2016-01-01

    A scatter imaging technique for the differentiation of cancerous and healthy breast tissue in a heterogeneous sample is introduced in this work. Such a technique has potential utility in intraoperative margin assessment during lumpectomy procedures. In this work, we investigate the feasibility of the imaging method for tumor classification using Monte Carlo simulations and physical experiments. The coded aperture coherent scatter spectral imaging technique was used to reconstruct three-dimensional (3-D) images of breast tissue samples acquired through a single-position snapshot acquisition, without rotation as is required in coherent scatter computed tomography. We perform a quantitative assessment of the accuracy of the cancerous voxel classification using Monte Carlo simulations of the imaging system; describe our experimental implementation of coded aperture scatter imaging; show the reconstructed images of the breast tissue samples; and present segmentations of the 3-D images in order to identify the cancerous and healthy tissue in the samples. From the Monte Carlo simulations, we find that coded aperture scatter imaging is able to reconstruct images of the samples and identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) inside them with a cancerous voxel identification sensitivity, specificity, and accuracy of 92.4%, 91.9%, and 92.0%, respectively. From the experimental results, we find that the technique is able to identify cancerous and healthy tissue samples and reconstruct differential coherent scatter cross sections that are highly correlated with those measured by other groups using x-ray diffraction. Coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue inside samples within a time on the order of a minute per slice.

  12. Method for measuring the focal spot size of an x-ray tube using a coded aperture mask and a digital detector.

    PubMed

    Russo, Paolo; Mettivier, Giovanni

    2011-04-01

    The goal of this study is to evaluate a new method based on a coded aperture mask combined with a digital x-ray imaging detector for measurements of the focal spot sizes of diagnostic x-ray tubes. Common techniques for focal spot size measurements employ a pinhole camera, a slit camera, or a star resolution pattern. The coded aperture mask is a radiation collimator consisting of a large number of apertures disposed on a predetermined grid in an array, through which the radiation source is imaged onto a digital x-ray detector. The method of the coded mask camera allows one to obtain a one-shot accurate and direct measurement of the two dimensions of the focal spot (like that for a pinhole camera) but at a low tube loading (like that for a slit camera). A large number of small apertures in the coded mask operate as a "multipinhole" with greater efficiency than a single pinhole, but keeping the resolution of a single pinhole. X-ray images result from the multiplexed output on the detector image plane of such a multiple aperture array, and the image of the source is digitally reconstructed with a deconvolution algorithm. Images of the focal spot of a laboratory x-ray tube (W anode: 35-80 kVp; focal spot size of 0.04 mm) were acquired at different geometrical magnifications with two different types of digital detector (a photon counting hybrid silicon pixel detector with 0.055 mm pitch and a flat panel CMOS digital detector with 0.05 mm pitch) using a high resolution coded mask (type no-two-holes-touching modified uniformly redundant array) with 480 0.07 mm apertures, designed for imaging at energies below 35 keV. Measurements with a slit camera were performed for comparison. A test with a pinhole camera and with the coded mask on a computed radiography mammography unit with 0.3 mm focal spot was also carried out. 
The full width at half maximum focal spot sizes were obtained from the line profiles of the decoded images, showing a focal spot of 0.120 mm x 0.105 mm at 35 kVp and M = 6.1, with a detector entrance exposure as low as 1.82 mR (0.125 mA s tube load). The slit camera indicated a focal spot of 0.112 mm x 0.104 mm at 35 kVp and M = 3.15, with an exposure at the detector of 72 mR. Focal spot measurements with the coded mask could be performed up to 80 kVp. Tolerance to angular misalignment with the reference beam up to 7 degrees in in-plane rotations and 1 degree in out-of-plane rotations was observed. The axial distance of the focal spot from the coded mask could also be determined. It is possible to determine the beam intensity via measurement of the intensity of the decoded image of the focal spot and via a calibration procedure. Coded aperture masks coupled to a digital area detector produce precise determinations of the focal spot of an x-ray tube with reduced tube loading and measurement time, coupled to a large tolerance in the alignment of the mask.
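The FWHM-from-line-profile step can be sketched as follows. This is a generic implementation with linear interpolation at the two half-maximum crossings; the triangular test profile and pixel pitch are illustrative, not data from the paper:

```python
import numpy as np

def fwhm(profile, pixel_pitch=1.0):
    """Full width at half maximum of a 1-D line profile, in the same
    units as pixel_pitch, using linear interpolation at the crossings."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    left, right = above[0], above[-1]
    # interpolate where the profile crosses the half-maximum level
    xl = left - 1 + (half - y[left - 1]) / (y[left] - y[left - 1]) if left > 0 else float(left)
    xr = right + (y[right] - half) / (y[right] - y[right + 1]) if right < y.size - 1 else float(right)
    return (xr - xl) * pixel_pitch

# illustrative profile sampled at a 0.05 mm detector pitch
width_mm = fwhm([0, 1, 2, 3, 2, 1, 0], pixel_pitch=0.05)
```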

  13. A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.

    2018-04-01

    The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
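The perspective design model validated for the γ-camera can be illustrated with a minimal pinhole-projection sketch. The intrinsic matrix below is a hypothetical example, not the calibrated values from the paper; given a depth from the RGB-D sensor, the same model maps a gamma-image pixel back to a 3-D hotspot position.

```python
import numpy as np

# Hypothetical intrinsic matrix for the gamma camera (focal length
# and principal point in pixels), as a calibration would fit it.
K = np.array([[300.0,   0.0, 128.0],
              [  0.0, 300.0, 128.0],
              [  0.0,   0.0,   1.0]])

def project(K, point_cam):
    """Perspective projection of a 3-D point (camera frame) to pixels."""
    u, v, w = K @ np.asarray(point_cam, dtype=float)
    return u / w, v / w

# A radioactive hotspot 2 m in front of the camera and 0.5 m
# off-axis lands at these pixel coordinates:
u, v = project(K, (0.5, 0.0, 2.0))   # (203.0, 128.0)
```

Once both sensors are calibrated against the same model, the pixel-wise gamma-to-depth mapping described above reduces to projecting through one camera and back-projecting through the other at the measured depth.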

  14. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular, or periodic, correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
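The cyclic property claimed in this patent abstract can be checked numerically with a 1-D toy pattern (the binary array below is hypothetical, not a pattern from the patent): any basic-size window cut from a mosaicked, shifted projection is a cyclically shifted copy of the basic pattern, so off-axis sources still contribute complete coding information.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
basic = (rng.random(n) < 0.5).astype(int)   # hypothetical basic aperture pattern
mosaic = np.tile(basic, 3)                  # mosaic of three basic periods

# An off-axis point source shifts the projected pattern on the
# picture plane by some number of elements.
shift = 5
projection = np.roll(mosaic, shift)

# Any window of basic size cut from the projection is a cyclic
# version of the basic pattern, so no coding information is lost.
window = projection[n:2 * n]
assert np.array_equal(window, np.roll(basic, shift))
```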

  15. Adaptive optics technique to overcome the turbulence in a large-aperture collimator.

    PubMed

    Mu, Quanquan; Cao, Zhaoliang; Li, Dayu; Hu, Lifa; Xuan, Li

    2008-03-20

    A collimator with a long focal length and large aperture is a very important apparatus for testing large-aperture optical systems. But it suffers from internal air turbulence, which may limit its performance and reduce the testing accuracy. To overcome this problem, an adaptive optics system is introduced to compensate for the turbulence. This system includes a liquid crystal on silicon device as a wavefront corrector and a Shack-Hartmann wavefront sensor. After correction, we obtain a plane wavefront with an rms of about 0.017λ (λ = 0.6328 μm) emitted from an aperture larger than 500 mm in diameter. The whole system reaches diffraction-limited resolution.

  16. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics

    PubMed Central

    Sowa, Katarzyna M.; Last, Arndt; Korecki, Paweł

    2017-01-01

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10–100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy. PMID:28322316

  17. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Sowa, Katarzyna M; Last, Arndt; Korecki, Paweł

    2017-03-21

    Polycapillary devices focus X-rays by means of multiple reflections of X-rays in arrays of bent glass capillaries. The size of the focal spot (typically 10-100 μm) limits the resolution of scanning, absorption and phase-contrast X-ray imaging using these devices. At the expense of a moderate resolution, polycapillary elements provide high intensity and are frequently used for X-ray micro-imaging with both synchrotrons and X-ray tubes. Recent studies have shown that the internal microstructure of such an optics can be used as a coded aperture that encodes high-resolution information about objects located inside the focal spot. However, further improvements to this variant of X-ray microscopy will require the challenging fabrication of tailored devices with a well-defined capillary microstructure. Here, we show that submicron coded aperture microscopy can be realized using a periodic grid that is placed at the output surface of a polycapillary optics. Grid-enhanced X-ray coded aperture microscopy with polycapillary optics does not rely on the specific microstructure of the optics but rather takes advantage only of its focusing properties. Hence, submicron X-ray imaging can be realized with standard polycapillary devices and existing set-ups for micro X-ray fluorescence spectroscopy.

  18. A panoramic coded aperture gamma camera for radioactive hotspots localization

    NASA Astrophysics Data System (ADS)

    Paradiso, V.; Amgarou, K.; Blanc De Lanaute, N.; Schoepff, V.; Amoyal, G.; Mahe, C.; Beltramello, O.; Liénard, E.

    2017-11-01

    A known disadvantage of the coded aperture imaging approach is its limited field-of-view (FOV), which is often insufficient when analysing complex dismantling scenes such as post-accidental scenarios, where multiple measurements are needed to fully characterize the scene. In order to overcome this limitation, a panoramic coded aperture γ-camera prototype has been developed. The system is based on a 1 mm thick CdTe detector directly bump-bonded to a Timepix readout chip, developed by the Medipix2 collaboration (256 × 256 pixels, 55 μm pitch, 14.08 × 14.08 mm2 sensitive area). A MURA pattern coded aperture is used, allowing for background subtraction without the use of heavy shielding. This system is then combined with a USB color camera. The output of each measurement is a semi-spherical image covering a FOV of 360 degrees horizontally and 80 degrees vertically, rendered in spherical coordinates (θ, φ). The geometrical shapes of the radiation-emitting objects are preserved by first registering and stitching the optical images captured by the prototype and then applying the same transformations to their corresponding radiation images. Panoramic gamma images generated with the proposed technique are described and discussed, along with the main experimental results obtained in laboratory campaigns.

  19. Hexagonal Uniformly Redundant Arrays (HURAs) for scintillator based coded aperture neutron imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gamage, K.A.A.; Zhou, Q.

    2015-07-01

    A series of Monte Carlo simulations have been conducted, making use of the EJ-426 neutron scintillator detector, to investigate the potential of using hexagonal uniformly redundant arrays (HURAs) for scintillator based coded aperture neutron imaging. This type of scintillator material has a low sensitivity to gamma rays and is therefore of particular use in a system with a source that emits both neutrons and gamma rays. The simulations used an AmBe source; neutron images have been produced using different coded-aperture materials (boron-10, cadmium-113 and gadolinium-157) and the location error has also been estimated. In each case the neutron image clearly shows the location of the source with a relatively small location error. Neutron images with high resolution can be easily used to identify and locate nuclear materials precisely in nuclear security and nuclear decommissioning applications. (authors)

  20. Secondary gamma-ray production in a coded aperture mask

    NASA Technical Reports Server (NTRS)

    Owens, A.; Frye, G. M., Jr.; Hall, C. J.; Jenkins, T. L.; Pendleton, G. N.; Carter, J. N.; Ramsden, D.; Agrinier, B.; Bonfand, E.; Gouiffes, C.

    1985-01-01

    The application of the coded aperture mask to high energy gamma-ray astronomy will provide the capability of locating a cosmic gamma-ray point source with a precision of a few arc-minutes above 20 MeV. Recent tests using a mask in conjunction with drift chamber detectors have shown that the expected point spread function is achieved over an acceptance cone of 25 deg. A telescope employing this technique differs from a conventional telescope only in that the presence of the mask modifies the radiation field in the vicinity of the detection plane. In addition to reducing the primary photon flux incident on the detector by absorption in the mask elements, the mask will also be a secondary radiator of gamma-rays. The various background components in a CAMTRAC (Coded Aperture Mask Track Chamber) telescope are considered. Monte-Carlo calculations are compared with recent measurements obtained using a prototype instrument in a tagged photon beam line.

  1. Medicine, material science and security: the versatility of the coded-aperture approach.

    PubMed

    Munro, P R T; Endrizzi, M; Diemoz, P C; Hagen, C K; Szafraniec, M B; Millard, T P; Zapata, C E; Speller, R D; Olivo, A

    2014-03-06

    The principal limitation to the widespread deployment of X-ray phase imaging in a variety of applications is probably versatility. A versatile X-ray phase imaging system must be able to work with polychromatic and non-microfocus sources (for example, those currently used in medical and industrial applications), have physical dimensions sufficiently large to accommodate samples of interest, be insensitive to environmental disturbances (such as vibrations and temperature variations), require only simple system set-up and maintenance, and be able to perform quantitative imaging. The coded-aperture technique, based upon the edge illumination principle, satisfies each of these criteria. To date, we have applied the technique to mammography, materials science, small-animal imaging, non-destructive testing and security. In this paper, we outline the theory of coded-aperture phase imaging and show an example of how the technique may be applied to imaging samples with a practically important scale.

  2. Fast-neutron, coded-aperture imager

    NASA Astrophysics Data System (ADS)

    Woolf, Richard S.; Phlips, Bernard F.; Hutcheson, Anthony L.; Wulf, Eric A.

    2015-06-01

    This work discusses a large-scale, coded-aperture imager for fast neutrons, building on a proof-of-concept instrument developed at the U.S. Naval Research Laboratory (NRL). The Space Science Division at the NRL has a heritage of developing large-scale, mobile systems, using coded-aperture imaging, for long-range γ-ray detection and localization. The fast-neutron, coded-aperture imaging instrument, designed for a mobile unit (20 ft. ISO container), consists of a 32-element array of 15 cm×15 cm×15 cm liquid scintillation detectors (EJ-309) mounted behind a 12×12 pseudorandom coded aperture. The elements of the aperture are composed of 15 cm×15 cm×10 cm blocks of high-density polyethylene (HDPE). The arrangement of the aperture elements produces a shadow pattern on the detector array behind the mask. By measuring the number of neutron counts per masked and unmasked detector, and with knowledge of the mask pattern, a source image can be deconvolved to obtain a 2-d location. The number of neutrons per detector was obtained by processing the fast signal from each PMT in flash digitizing electronics. Digital pulse shape discrimination (PSD) was performed to filter out the fast-neutron signal from the γ background. The prototype instrument was tested at an indoor facility at the NRL with 1.8-μCi and 13-μCi 252Cf neutron/γ sources at three standoff distances of 9, 15 and 26 m (the maximum allowed in the facility) over a 15-min integration time. The imaging and detection capabilities of the instrument were tested by moving the source in half- and one-pixel increments across the image plane. We show a representative sample of the results obtained at one-pixel increments for a standoff distance of 9 m. The 1.8-μCi source was not detected at the 26-m standoff. In order to increase the sensitivity of the instrument, we reduced the fast-neutron background by shielding the top, sides and back of the detector array with 10-cm-thick HDPE. 
This shielding configuration reduced the background by a factor of 1.7 and thus allowed for the detection and localization of the 1.8-μCi source. The detection significance for each source at different standoff distances will be discussed.

  3. DMD-based implementation of patterned optical filter arrays for compressive spectral imaging.

    PubMed

    Rueda, Hoover; Arguello, Henry; Arce, Gonzalo R

    2015-01-01

    Compressive spectral imaging (CSI) captures multispectral imagery using fewer measurements than those required by traditional Shannon-Nyquist theory-based sensing procedures. CSI systems acquire coded and dispersed random projections of the scene rather than direct measurements of the voxels. To date, the coding procedure in CSI has been realized through the use of block-unblock coded apertures (CAs), commonly implemented as chrome-on-quartz photomasks. These apertures block or transmit the entire spectrum from the scene at given spatial locations, thus modulating the spatial characteristics of the scene. This paper extends the framework of CSI by replacing the traditional block-unblock photomasks with patterned optical filter arrays, referred to as colored coded apertures (CCAs). These, in turn, allow the source to be modulated not only spatially but spectrally as well, enabling more powerful coding strategies. The proposed CCAs are synthesized through linear combinations of low-pass, high-pass, and bandpass filters, paired with binary pattern ensembles realized by a digital micromirror device. The optical forward model of the proposed CSI architecture is presented along with a proof-of-concept implementation, which achieves noticeable improvements in the quality of the reconstruction.
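The difference between block-unblock and colored coded apertures can be sketched with a toy forward model (all arrays below are hypothetical, not the paper's optical design): a block-unblock code applies one 0/1 weight per spatial position to every band alike, while a colored code applies an independent transmittance per position and per band.

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nb = 6, 4                        # spatial positions, spectral bands
scene = rng.random((nx, nb))         # toy spectral datacube (1-D scene)

# Block-unblock aperture: one binary weight per position, identical
# for all bands, so only the spatial dimension is modulated.
block = rng.integers(0, 2, nx).astype(float)
y_block = (scene * block[:, None]).sum(axis=1)

# Colored coded aperture: a filter transmittance per position AND
# per band (linear combinations of low-/high-/band-pass responses),
# so the code modulates spatially and spectrally.
colored = rng.random((nx, nb))
y_colored = (scene * colored).sum(axis=1)
```

The colored sensing operator has nx*nb degrees of freedom instead of nx, which is what enables the richer coding strategies the abstract describes.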

  4. Coded aperture imaging with self-supporting uniformly redundant arrays. [Patent application

    DOEpatents

    Fenimore, E.E.

    1980-09-26

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput.
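Balanced correlation of the kind described in this patent abstract can be demonstrated on a plain 1-D URA; the self-supporting variant additionally assigns 0 weights to the supporting area, while this sketch uses only the ordinary +1/-1 weighting. With the length-7 quadratic-residue array, a shifted shadowgram decodes to a single peak on flat sidelobes.

```python
import numpy as np

# Length-7 URA built from the quadratic residues mod 7: holes at {1, 2, 4}.
A = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)
G = 2 * A - 1          # balanced decoding array: hole -> +1, non-hole -> -1

def decode(shadow, G):
    """Cyclic balanced correlation of a shadowgram with G."""
    n = len(G)
    return np.array([np.dot(np.roll(shadow, -j), G) for j in range(n)])

# A point source off-axis by 3 elements casts a shifted copy of the
# mask pattern onto the detector.
shadow = np.roll(A, 3)
recon = decode(shadow, G)   # [-1, -1, -1, 3, -1, -1, -1]: peak at index 3
```

In the self-supporting scheme, detector positions behind the support get weight 0 in `G`, so the web contributes nothing to the correlation sum.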

  5. Data port security lock

    DOEpatents

    Quinby, Joseph D [Albuquerque, NM; Hall, Clarence S [Albuquerque, NM

    2008-06-24

    In a security apparatus for securing an electrical connector, a plug may be fitted for insertion into a connector receptacle compliant with a connector standard. The plug has at least one aperture adapted to engage at least one latch in the connector receptacle. An engagement member is adapted to partially extend through at least one aperture and lock to at least one structure within the connector receptacle.

  6. Generating Artificial Reference Images for Open Loop Correlation Wavefront Sensors

    NASA Astrophysics Data System (ADS)

    Townson, M. J.; Love, G. D.; Saunter, C. D.

    2018-05-01

    Shack-Hartmann wavefront sensors for both solar and laser guide star adaptive optics (with elongated spots) need to observe extended objects. Correlation techniques have been successfully employed to measure the wavefront gradient in solar adaptive optics systems and have been proposed for laser guide star systems. In this paper we describe a method for synthesising reference images for correlation Shack-Hartmann wavefront sensors with a larger field of view than individual sub-apertures. We then show how these supersized reference images can increase the performance of correlation wavefront sensors in regimes where large relative shifts are induced between sub-apertures, such as those observed in open-loop wavefront sensors. The technique we describe requires no external knowledge outside of the wavefront-sensor images, making it available as an entirely "software" upgrade to an existing adaptive optics system. For solar adaptive optics we show the supersized reference images extend the magnitude of shifts which can be accurately measured from 12% to 50% of the field of view of a sub-aperture and in laser guide star wavefront sensors the magnitude of centroids that can be accurately measured is increased from 12% to 25% of the total field of view of the sub-aperture.
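The shift measurement underlying a correlation Shack-Hartmann sensor with a reference image larger than the sub-aperture field of view can be sketched as an exhaustive template match. The random imagery and squared-difference metric below are illustrative assumptions; practical sensors use normalized cross-correlation with sub-pixel interpolation.

```python
import numpy as np

rng = np.random.default_rng(3)
ref = rng.random((24, 24))     # supersized reference image (extended scene)
sub = 12                       # sub-aperture image size in pixels

# The sub-aperture image is a shifted crop of the scene; the shift
# encodes the local wavefront gradient.
dy, dx = 7, 3
img = ref[dy:dy + sub, dx:dx + sub]

# Match the sub-aperture image against every placement inside the
# larger reference; the minimum squared difference gives the shift.
best, best_err = None, np.inf
for y in range(ref.shape[0] - sub + 1):
    for x in range(ref.shape[1] - sub + 1):
        err = np.sum((img - ref[y:y + sub, x:x + sub]) ** 2)
        if err < best_err:
            best, best_err = (y, x), err
```

Because the reference extends beyond the sub-aperture, shifts approaching half the reference extent remain measurable, which is the open-loop advantage the abstract reports.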

  7. Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems.

    PubMed

    Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu

    2018-02-01

    Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.
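The paper's hyperbolic approximation itself is not reproduced here, but the relation it approximates can be sketched from Poisson statistics: if noise primaries arrive with mean n per range gate, a GM-APD fires falsely with probability p = 1 - exp(-n), which inverts to n = -ln(1 - p). The function below is an illustrative stand-in for the authors' estimator, not their formula.

```python
import math

def noise_photons_from_false_rate(p_false):
    """Estimate the mean noise-photon count per gate from the dark
    false-firing fraction, assuming Poisson arrivals:
    p = 1 - exp(-n)  =>  n = -ln(1 - p)."""
    if not 0.0 <= p_false < 1.0:
        raise ValueError("false-firing fraction must lie in [0, 1)")
    return -math.log(1.0 - p_false)

# A 10% dark false-firing rate implies about 0.105 noise photons per gate.
n_noise = noise_photons_from_false_rate(0.10)
```

An adaptive-aperture controller could use such an estimate to stop the aperture down until the inferred background-noise count falls below a threshold.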

  8. Adaptive aperture for Geiger mode avalanche photodiode flash ladar systems

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Han, Shaokun; Xia, Wenze; Lei, Jieyu

    2018-02-01

    Although the Geiger-mode avalanche photodiode (GM-APD) flash ladar system offers the advantages of high sensitivity and simple construction, its detection performance is influenced not only by the incoming signal-to-noise ratio but also by the absolute number of noise photons. In this paper, we deduce a hyperbolic approximation to estimate the noise-photon number from the false-firing percentage in a GM-APD flash ladar system under dark conditions. By using this hyperbolic approximation function, we introduce a method to adapt the aperture to reduce the number of incoming background-noise photons. Finally, the simulation results show that the adaptive-aperture method decreases the false probability in all cases, increases the detection probability provided that the signal exceeds the noise, and decreases the average ranging error per frame.

  9. Digital equalization of time-delay array receivers on coherent laser communications.

    PubMed

    Belmonte, Aniceto

    2017-01-15

    Field conjugation arrays use adaptive combining techniques on multi-aperture receivers to improve the performance of coherent laser communication links by mitigating the consequences of atmospheric turbulence on the down-converted coherent power. However, this motivates the use of complex receivers as optical signals collected by different apertures need to be adaptively processed, co-phased, and scaled before they are combined. Here, we show that multiple apertures, coupled with optical delay lines, combine retarded versions of a signal at a single coherent receiver, which uses digital equalization to obtain diversity gain against atmospheric fading. We found in our analysis that, instead of field conjugation arrays, digital equalization of time-delay multi-aperture receivers is a simpler and more versatile approach to accomplish reduction of atmospheric fading.

  10. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  11. SolTrace Background | Concentrating Solar Power | NREL

    Science.gov Websites

    codes was written to model a very specific optical geometry, and each one built upon the others in an evolutionary way. Examples of such codes include: OPTDSH, a code written to model circular aperture parabolic

  12. Observations of meteor-head echoes using the Jicamarca 50MHz radar in interferometer mode

    NASA Astrophysics Data System (ADS)

    Chau, J. L.; Woodman, R. F.

    2004-03-01

    We present results of recent observations of meteor-head echoes obtained with the high-power large-aperture Jicamarca 50 MHz radar (11.95°S, 76.87°W) in an interferometric mode. The large power-aperture product of the system allows us to record more than 3000 meteors per hour in the small volume subtended by the 1° antenna beam, albeit when the cluttering equatorial electrojet (EEJ) echoes are not present or are very weak. The interferometry arrangement allows the determination of the radiant (trajectory) and speed of each meteor. It is found that the radiant distribution of all detected meteors is concentrated in relatively small angles centered around the Earth's Apex as it transits over the Jicamarca sky, i.e. around the corresponding Earth heading for the particular observational day and time, for all seasons observed so far. The dispersion around the Apex is ~18° in a direction transverse to the Ecliptic plane and only 8.5° in heliocentric longitude in the Ecliptic plane, both in the Earth's inertial frame of reference. No appreciable interannual variability has been observed. Moreover, no population related to the optical (larger meteors) Leonid showers of 1998-2002 is found, in agreement with other large power-aperture radar observations.

    A novel cross-correlation detection technique (adaptive match-filtering) is used in combination with a 13-baud Barker phase code. The technique allows us to get good range resolution (0.75 km) without any sensitivity deterioration for the same average power, compared to the non-coded long-pulse scheme used at other radars. The matching Doppler shift provides an estimation of the velocity within a pulse with the same accuracy as if a non-coded pulse of the same length had been used. The velocity distribution of the meteors is relatively narrow and centered around 60 km/s. Therefore most of the meteors have an almost circular retrograde orbit around the Sun. Less than 8% of the velocities correspond to interstellar orbits, i.e. with velocities larger than the solar escape velocity (72 km/s). Other statistical distributions of interest are also presented.

  13. Quasi-real-time end-to-end simulations of ELT-scale adaptive optics systems on GPUs

    NASA Astrophysics Data System (ADS)

    Gratadour, Damien

    2011-09-01

    Our team has started the development of a code dedicated to GPUs for the simulation of AO systems at the E-ELT scale. It uses the CUDA toolkit and an original binding to Yorick (an open source interpreted language) to provide the user with a comprehensive interface. In this paper we present the first performance analysis of our simulation code, showing its ability to provide Shack-Hartmann (SH) images and measurements at the kHz scale for VLT-sized AO system and in quasi-real-time (up to 70 Hz) for ELT-sized systems on a single top-end GPU. The simulation code includes multiple layers atmospheric turbulence generation, ray tracing through these layers, image formation at the focal plane of every sub-apertures of a SH sensor using either natural or laser guide stars and centroiding on these images using various algorithms. Turbulence is generated on-the-fly giving the ability to simulate hours of observations without the need of loading extremely large phase screens in the global memory. Because of its performance this code additionally provides the unique ability to test real-time controllers for future AO systems under nominal conditions.

  14. Apparatus and method to achieve high-resolution microscopy with non-diffracting or refracting radiation

    DOEpatents

    Tobin, Jr., Kenneth W.; Bingham, Philip R.; Hawari, Ayman I.

    2012-11-06

    An imaging system employing a coded aperture mask having multiple pinholes is provided. The coded aperture mask is placed at a radiation source to pass the radiation through. The radiation impinges on, and passes through an object, which alters the radiation by absorption and/or scattering. Upon passing through the object, the radiation is detected at a detector plane to form an encoded image, which includes information on the absorption and/or scattering caused by the material and structural attributes of the object. The encoded image is decoded to provide a reconstructed image of the object. Because the coded aperture mask includes multiple pinholes, the radiation intensity is greater than a comparable system employing a single pinhole, thereby enabling a higher resolution. Further, the decoding of the encoded image can be performed to generate multiple images of the object at different distances from the detector plane. Methods and programs for operating the imaging system are also disclosed.

  15. Average spectral efficiency analysis of FSO links over turbulence channel with adaptive transmissions and aperture averaging

    NASA Astrophysics Data System (ADS)

    Aarthi, G.; Ramachandra Reddy, G.

    2018-03-01

    In our paper, the impact of two adaptive transmission schemes, (i) optimal rate adaptation (ORA) and (ii) channel inversion with fixed rate (CIFR), on the average spectral efficiency (ASE) is explored for free-space optical (FSO) communications with On-Off Keying (OOK), Polarization shift keying (POLSK), and Coherent optical wireless communication (Coherent OWC) systems under different turbulence regimes. To further enhance the ASE we have incorporated aperture averaging effects along with the above adaptive schemes. The results indicate that the ORA adaptation scheme has the advantage of improving the ASE performance compared with CIFR under moderate and strong turbulence regimes. The coherent OWC system with ORA excels the other modulation schemes and could achieve an ASE of 49.8 bits/s/Hz at an average transmitted optical power of 6 dBm under strong turbulence. By adding the aperture averaging effect we could achieve an ASE of 50.5 bits/s/Hz under the same conditions. This makes ORA with Coherent OWC modulation a favorable candidate for improving the ASE of the FSO communication system.

  16. Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 2: Appendix

    NASA Technical Reports Server (NTRS)

    Wiley, C. A.; Chang, M. U.

    1981-01-01

    A number of topics supporting the systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are discussed. Fellgett's (multiple) advantage, interferometer mapping behavior, mapping geometry, image processing programs, and sampling errors are among the topics discussed. A FORTRAN program code is given.

  17. Lower Limits on Aperture Size for an ExoEarth Detecting Coronagraphic Mission

    NASA Technical Reports Server (NTRS)

    Stark, Christopher C.; Roberge, Aki; Mandell, Avi; Clampin, Mark; Domagal-Goldman, Shawn D.; McElwain, Michael W.; Stapelfeldt, Karl R.

    2015-01-01

    The yield of Earth-like planets will likely be a primary science metric for future space-based missions that will drive telescope aperture size. Maximizing the exoEarth candidate yield is therefore critical to minimizing the required aperture. Here we describe a method for exoEarth candidate yield maximization that simultaneously optimizes, for the first time, the targets chosen for observation, the number of visits to each target, the delay time between visits, and the exposure time of every observation. This code calculates both the detection time and multiwavelength spectral characterization time required for planets. We also refine the astrophysical assumptions used as inputs to these calculations, relying on published estimates of planetary occurrence rates as well as theoretical and observational constraints on terrestrial planet sizes and classical habitable zones. Given these astrophysical assumptions, optimistic telescope and instrument assumptions, and our new completeness code that produces the highest yields to date, we suggest lower limits on the aperture size required to detect and characterize a statistically motivated sample of exoEarths.

  18. Progress in NEXT Ion Optics Modeling

    NASA Technical Reports Server (NTRS)

    Emhoff, Jerold W.; Boyd, Iain D.

    2004-01-01

    Results are presented from an ion optics simulation code applied to the NEXT ion thruster geometry. The error in the potential field solver of the code is characterized, and methods and requirements for reducing this error are given. Results from a study on electron backstreaming using the improved field solver are given and shown to compare much better to experimental results than previous studies. Results are also presented on a study of the beamlet behavior in the outer radial apertures of the NEXT thruster. The low beamlet currents in this region allow over-focusing of the beam, causing direct impingement of ions on the accelerator grid aperture wall. Different possibilities for reducing this direct impingement are analyzed, with the conclusion that, of the methods studied, decreasing the screen grid aperture diameter eliminates direct impingement most effectively.

  19. Accelerator test of the coded aperture mask technique for gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.

    1982-01-01

    A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.

  20. Dual-sided coded-aperture imager

    DOEpatents

    Ziock, Klaus-Peter [Clinton, TN

    2009-09-22

    In a vehicle, a single detector plane simultaneously measures radiation coming through two coded-aperture masks, one on either side of the detector. To determine which side of the vehicle a source is, the two shadow masks are inverses of each other, i.e., one is a mask and the other is the anti-mask. All of the data that is collected is processed through two versions of an image reconstruction algorithm. One treats the data as if it were obtained through the mask, the other as though the data is obtained through the anti-mask.
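The mask/anti-mask idea in this patent abstract can be sketched in 1-D with a URA (toy arrays, not the patent's actual masks): decoding the same detector data with the mask weights and with the anti-mask weights flips the sign of a source's peak, which is what identifies the side of the vehicle the source is on.

```python
import numpy as np

A = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)   # toy 1-D URA mask
G_mask = 2 * A - 1                                 # mask decoding weights
G_anti = 2 * (1 - A) - 1                           # anti-mask weights (= -G_mask)

def decode(shadow, G):
    """Cyclic balanced correlation of a shadowgram with weights G."""
    n = len(G)
    return np.array([np.dot(np.roll(shadow, -j), G) for j in range(n)])

# A source on the mask side casts a shifted copy of the mask pattern.
shadow = np.roll(A, 3)
recon_mask = decode(shadow, G_mask)   # positive peak at index 3
recon_anti = decode(shadow, G_anti)   # same image, negated

# The reconstruction whose peak is positive identifies the source side.
```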

  1. MO-F-CAMPUS-I-04: Characterization of Fan Beam Coded Aperture Coherent Scatter Spectral Imaging Methods for Differentiation of Normal and Neoplastic Breast Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Albanese, K; Lakshmanan, M

    Purpose: This study intends to characterize the spectral and spatial resolution limits of various fan beam geometries for differentiation of normal and neoplastic breast structures via coded aperture coherent scatter spectral imaging techniques. In previous studies, pencil beam raster scanning methods using coherent scatter computed tomography and selected volume tomography have yielded excellent results for tumor discrimination. However, these methods do not readily conform to clinical constraints, primarily because of prolonged scan times and excessive dose to the patient. Here, we refine a fan beam coded aperture coherent scatter imaging system to characterize the tradeoffs between dose, scan time and image quality for breast tumor discrimination. Methods: An X-ray tube (125kVp, 400mAs) illuminated the sample with collimated fan beams of varying widths (3mm to 25mm). Scatter data were collected via two linear-array energy-sensitive detectors oriented parallel and perpendicular to the beam plane. An iterative reconstruction algorithm yields images of the sample's spatial distribution and the respective spectral data for each location. To model in-vivo tumor analysis, surgically resected breast tumor samples were used in conjunction with lard, which has a form factor comparable to adipose (fat). Results: Quantitative analysis with the current setup geometry indicated optimal performance for beams up to 10mm wide, with wider beams producing poorer spatial resolution. Scan time for a fixed volume was reduced by a factor of 6 when scanned with a 10mm fan beam compared to a 1.5mm pencil beam. Conclusion: The study demonstrates that fan beam coherent scatter spectral imaging can differentiate normal and neoplastic breast tissues while reducing dose and scan times and sufficiently preserving spectral and spatial resolution. Future work to alter the coded aperture and detector geometries could potentially allow the use of even wider fans, thereby making coded aperture coherent scatter imaging a clinically viable method for breast cancer detection. United States Department of Homeland Security; Duke University Medical Center - Department of Radiology; Carl E Ravin Advanced Imaging Laboratories; Duke University Medical Physics Graduate Program.

  2. Thermal Neutron Imaging Using A New Pad-Based Position Sensitive Neutron Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dioszegi I.; Vanier P.E.; Salwen C.

    2016-10-29

    Thermal neutrons (with mean energy of 25 meV) have a scattering mean free path of about 20 m in air. Therefore it is feasible to find localized thermal neutron sources up to ~30 m standoff distance using thermal neutron imaging. Coded aperture thermal neutron imaging was developed in our laboratory in the nineties, using He-3 filled wire chambers. Recently a new generation of coded-aperture neutron imagers has been developed. In the new design the ionization chamber has anode and cathode planes, where the anode is composed of an array of individual pads. The charge is collected on each of the individual 5x5 mm2 anode pads (48x48 in total, corresponding to 24x24 cm2 sensitive area) and read out by application specific integrated circuits (ASICs). The high sensitivity of the ASICs allows unity gain operation mode. The new design has several advantages for field deployable imaging applications, compared to the previous generation of wire-grid based neutron detectors. Among these are the rugged design, lighter weight and use of non-flammable stopping gas. For standoff localization of thermalized neutron sources a low resolution (11x11 pixel) coded aperture mask has been fabricated. Using the new larger area detector and the coarse resolution mask we performed several standoff experiments using moderated californium and plutonium sources at Idaho National Laboratory. In this paper we will report on the development and performance of the new pad-based neutron camera, and present long range coded-aperture images of various thermalized neutron sources.

  3. Galaxy And Mass Assembly: accurate panchromatic photometry from optical priors using LAMBDAR

    NASA Astrophysics Data System (ADS)

    Wright, A. H.; Robotham, A. S. G.; Bourne, N.; Driver, S. P.; Dunne, L.; Maddox, S. J.; Alpaslan, M.; Andrews, S. K.; Bauer, A. E.; Bland-Hawthorn, J.; Brough, S.; Brown, M. J. I.; Clarke, C.; Cluver, M.; Davies, L. J. M.; Grootes, M. W.; Holwerda, B. W.; Hopkins, A. M.; Jarrett, T. H.; Kafle, P. R.; Lange, R.; Liske, J.; Loveday, J.; Moffett, A. J.; Norberg, P.; Popescu, C. C.; Smith, M.; Taylor, E. N.; Tuffs, R. J.; Wang, L.; Wilkins, S. M.

    2016-07-01

    We present the Lambda Adaptive Multi-Band Deblending Algorithm in R (LAMBDAR), a novel code for calculating matched aperture photometry across images that are neither pixel- nor PSF-matched, using prior aperture definitions derived from high-resolution optical imaging. The development of this program is motivated by the desire for consistent photometry and uncertainties across large ranges of photometric imaging, for use in calculating spectral energy distributions. We describe the program, specifically key features required for robust determination of panchromatic photometry: propagation of apertures to images with arbitrary resolution, local background estimation, aperture normalization, uncertainty determination and propagation, and object deblending. Using simulated images, we demonstrate that the program is able to recover accurate photometric measurements in both high-resolution, low-confusion, and low-resolution, high-confusion, regimes. We apply the program to the 21-band photometric data set from the Galaxy And Mass Assembly (GAMA) Panchromatic Data Release (PDR; Driver et al. 2016), which contains imaging spanning the far-UV to the far-IR. We compare photometry derived from LAMBDAR with that presented in Driver et al. (2016), finding broad agreement between the data sets. None the less, we demonstrate that the photometry from LAMBDAR is superior to that from the GAMA PDR, as determined by a reduction in the outlier rate and intrinsic scatter of colours in the LAMBDAR data set. We similarly find a decrease in the outlier rate of stellar masses and star formation rates using LAMBDAR photometry. Finally, we note an exceptional increase in the number of UV and mid-IR sources able to be constrained, which is accompanied by a significant increase in the mid-IR colour-colour parameter-space able to be explored.
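The core measurement LAMBDAR performs, summing flux within a prior-defined aperture and subtracting a locally estimated background, can be sketched in a few lines. This is a hypothetical minimal version; the real code also handles PSF matching, aperture normalization, uncertainty propagation, and deblending.

```python
import numpy as np

def aperture_flux(img, x0, y0, r_ap, r_in, r_out):
    """Sum pixels inside a circular aperture, subtracting a local median
    background estimated in a surrounding annulus."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    ap = r <= r_ap                       # source aperture
    ann = (r >= r_in) & (r <= r_out)     # background annulus
    bkg = np.median(img[ann])            # local background estimate
    return img[ap].sum() - bkg * ap.sum()

# Synthetic frame: flat sky of 2.0 counts plus a compact source of 50 counts
img = np.full((64, 64), 2.0)
img[30:33, 30:33] += 50.0 / 9.0
flux = aperture_flux(img, 31, 31, r_ap=6, r_in=10, r_out=15)
```

With a flat background the annulus median is exact, so the recovered flux is the injected 50 counts.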

  4. Solar Adaptive Optics.

    PubMed

    Rimmele, Thomas R; Marino, Jose

    Adaptive optics (AO) has become an indispensable tool at ground-based solar telescopes. AO enables the ground-based observer to overcome the adverse effects of atmospheric seeing and obtain diffraction limited observations. Over the last decade adaptive optics systems have been deployed at major ground-based solar telescopes and revitalized ground-based solar astronomy. The relatively small aperture of solar telescopes and the bright source make solar AO possible for visible wavelengths where the majority of solar observations are still performed. Solar AO systems enable diffraction limited observations of the Sun for a significant fraction of the available observing time at ground-based solar telescopes, which often have a larger aperture than equivalent space-based observatories, such as HINODE. New groundbreaking scientific results have been achieved with solar adaptive optics and this trend continues. New large aperture telescopes are currently being deployed or are under construction. With the aid of solar AO these telescopes will obtain observations of the highly structured and dynamic solar atmosphere with unprecedented resolution. This paper reviews solar adaptive optics techniques and summarizes the recent progress in the field of solar adaptive optics. An outlook to future solar AO developments, including a discussion of Multi-Conjugate AO (MCAO) and Ground-Layer AO (GLAO), will be given. Supplementary material is available for this article at 10.12942/lrsp-2011-2.

  5. Use of atmospheric backscattering for adaptive formation of the initial wave front of a laser beam by the method of aperture sensing

    NASA Astrophysics Data System (ADS)

    Gordeev, E. V.; Kuskov, V. V.; Razenkov, I. A.; Shesternin, A. N.

    2017-11-01

    The quality of adaptive suppression of initial aberrations of the wave front of a main laser beam with the use of the method of aperture sensing by the signal of atmospheric backscattering of the additional (sensing) laser radiation at a different wavelength has been studied experimentally. It is shown that wavefront distortions of the main laser beam were decreased significantly during the setup operation.

  6. System optimization on coded aperture spectrometer

    NASA Astrophysics Data System (ADS)

    Liu, Hua; Ding, Quanxin; Wang, Helong; Chen, Hongliang; Guo, Chunjie; Zhou, Liwei

    2017-10-01

    To find a simple multiple-configuration solution, achieve higher refractive efficiency, and reduce the disturbance caused by FOV changes, especially in a two-dimensional spatial expansion, a coded aperture system is designed with a special structure comprising an objective, a coded component, prism reflex system components, a compensatory plate and an imaging lens. Correlative algorithms and imaging methods are available to ensure this system can be corrected and optimized adequately. Simulation results show that the system can meet the application requirements in MTF, REA, RMS and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration.

  7. Dual-camera design for coded aperture snapshot spectral imaging.

    PubMed

    Wang, Lizhi; Xiong, Zhiwei; Gao, Dahua; Shi, Guangming; Wu, Feng

    2015-02-01

    Coded aperture snapshot spectral imaging (CASSI) provides an efficient mechanism for recovering 3D spectral data from a single 2D measurement. However, since the reconstruction problem is severely underdetermined, the quality of recovered spectral data is usually limited. In this paper we propose a novel dual-camera design to improve the performance of CASSI while maintaining its snapshot advantage. Specifically, a beam splitter is placed in front of the objective lens of CASSI, which allows the same scene to be simultaneously captured by a grayscale camera. This uncoded grayscale measurement, in conjunction with the coded CASSI measurement, greatly eases the reconstruction problem and yields high-quality 3D spectral data. Both simulation and experimental results demonstrate the effectiveness of the proposed method.

  8. Simulation of image formation in x-ray coded aperture microscopy with polycapillary optics.

    PubMed

    Korecki, P; Roszczynialski, T P; Sowa, K M

    2015-04-06

    In x-ray coded aperture microscopy with polycapillary optics (XCAMPO), the microstructure of focusing polycapillary optics is used as a coded aperture and enables depth-resolved x-ray imaging at a resolution better than the focal spot dimensions. Improvements in the resolution and development of 3D encoding procedures require a simulation model that can predict the outcome of XCAMPO experiments. In this work we introduce a model of image formation in XCAMPO which enables calculation of XCAMPO datasets for arbitrary positions of the object relative to the focal plane as well as to incorporate optics imperfections. In the model, the exit surface of the optics is treated as a micro-structured x-ray source that illuminates a periodic object. This makes it possible to express the intensity of XCAMPO images as a convolution series and to perform simulations by means of fast Fourier transforms. For non-periodic objects, the model can be applied by enforcing artificial periodicity and setting the spatial period larger then the field-of-view. Simulations are verified by comparison with experimental data.
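The convolution-series idea, treating the exit surface of the optics as a structured source illuminating a periodic object so that image intensities can be computed with FFTs, can be illustrated with a generic circular-convolution sketch. The kernel below is an arbitrary stand-in, not the actual polycapillary microstructure, and the zero-padding step mimics enforcing an artificial period larger than the field of view.

```python
import numpy as np

def circular_convolve(obj, kernel):
    """Periodic (circular) convolution via 2D FFTs, as used when the
    object is treated as periodic."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(kernel)))

rng = np.random.default_rng(2)
obj = rng.random((32, 32))               # toy periodic object
kernel = np.zeros((32, 32))
kernel[:3, :3] = 1.0 / 9.0               # small blur as illumination stand-in

img = circular_convolve(obj, kernel)

# For a non-periodic object, zero-pad so the artificial period exceeds
# the field of view before convolving, then crop back
pad = np.pad(obj, ((0, 32), (0, 32)))
kpad = np.pad(kernel, ((0, 32), (0, 32)))
img_pad = circular_convolve(pad, kpad)[:32, :32]
```

The FFT route computes the same result as summing shifted copies of the object weighted by the kernel, but in O(N log N) time.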

  9. Detection of Explosive Devices using X-ray Backscatter Radiation

    NASA Astrophysics Data System (ADS)

    Faust, Anthony A.

    2002-09-01

    It is our goal to develop a coded aperture based X-ray backscatter imaging detector that will provide sufficient speed, contrast and spatial resolution to detect Antipersonnel Landmines and Improvised Explosive Devices (IED). While our final objective is to field a hand-held detector, we have currently constrained ourselves to a design that can be fielded on a small robotic platform. Coded aperture imaging has been used by the observational gamma astronomy community for a number of years. However, it is recent advances in the field of medical nuclear imaging which have allowed for the application of the technique to a backscatter scenario. In addition, driven by requirements in medical applications, advances in X-ray detection are continually being made, and detectors are now being produced that are faster, cheaper and lighter than those of only a decade ago. With these advances, a coded aperture hand-held imaging system has only recently become a possibility. This paper will begin with an introduction to the technique, identify recent advances which have made this approach possible, present a simulated example case, and conclude with a discussion on future work.

  10. Coded aperture imaging with self-supporting uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.

    1983-01-01

    A self-supporting uniformly redundant array pattern for coded aperture imaging. The present invention utilizes holes which are an integer times smaller in each direction than holes in conventional URA patterns. A balance correlation function is generated where holes are represented by 1's, nonholes are represented by -1's, and supporting area is represented by 0's. The self-supporting array can be used for low energy applications where substrates would greatly reduce throughput. The balance correlation response function for the self-supporting array pattern provides an accurate representation of the source of nonfocusable radiation.
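The balanced-correlation construction described above (holes mapped to +1, non-holes to -1, supporting webs to 0) can be sketched numerically. The pattern below is a random stand-in for a true uniformly redundant array, so the sidelobes are not flat, but the point-source response illustrates the decoding.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
mask = rng.integers(0, 2, (n, n))            # 1 = open cell, 0 = closed cell
web = (np.arange(n) % 4 == 0)                # toy support webs every 4th row/col
support = np.zeros((n, n), dtype=bool)
support[web, :] = True
support[:, web] = True
mask[support] = 0                            # webs are necessarily opaque

# Balanced decoding array: holes -> +1, closed cells -> -1, support -> 0
G = np.where(mask == 1, 1.0, -1.0)
G[support] = 0.0

# Point source at offset (3, 7) casts a shifted shadow of the open cells
detector = np.roll(mask.astype(float), (3, 7), axis=(0, 1))

# Circular cross-correlation via 2D FFTs reconstructs the source
recon = np.real(np.fft.ifft2(np.fft.fft2(detector) * np.conj(np.fft.fft2(G))))
```

At the true offset the balanced correlation equals the number of open cells, since webs contribute zero rather than biasing the response.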

  11. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. In our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. A sparse array and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method which adopts both coded excitation and synthetic aperture techniques. In this study, the basis of applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.

  12. Perceiving Affordances for Fitting through Apertures

    ERIC Educational Resources Information Center

    Ishak, Shaziela; Adolph, Karen E.; Lin, Grace C.

    2008-01-01

    Affordances--possibilities for action--are constrained by the match between actors and their environments. For motor decisions to be adaptive, affordances must be detected accurately. Three experiments examined the correspondence between motor decisions and affordances as participants reached through apertures of varying size. A psychophysical…

  13. Edge detection based on adaptive threshold b-spline wavelet for optical sub-aperture measuring

    NASA Astrophysics Data System (ADS)

    Zhang, Shiqi; Hui, Mei; Liu, Ming; Zhao, Zhu; Dong, Liquan; Liu, Xiaohua; Zhao, Yuejin

    2015-08-01

    In research on optical synthetic aperture imaging systems, phase congruency is the main problem, and it is necessary to detect the sub-aperture phase. The edge of the sub-aperture system is more complex than in a traditional optical imaging system. Because of the steep slope of large-aperture optical components, interference fringes may be quite dense in interference imaging, and deep phase gradients may cause a loss of phase information. An efficient edge detection method is therefore needed. Wavelet analysis is a powerful tool widely used in image processing. Owing to its multi-scale transform properties, edge regions are detected with high precision at small scales, while noise is increasingly suppressed as the scale grows. In addition, an adaptive threshold method, which sets different thresholds in different regions, can separate edge points from noise. First, the fringe pattern is obtained and a cubic b-spline wavelet is adopted as the smoothing function. After multi-scale wavelet decomposition of the whole image, the local modulus maxima along gradient directions are computed. Because these still contain noise, the adaptive threshold method is used to select the modulus maxima: points greater than the threshold value are taken as boundary points. Finally, erosion and dilation are applied to the resulting image to obtain a continuous image boundary.
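A stripped-down version of this pipeline, smoothing followed by gradient modulus and a region-wise adaptive threshold, might look as follows. Box smoothing stands in for the cubic b-spline wavelet, only a single scale is used, and the morphological clean-up step is omitted.

```python
import numpy as np

def smooth(img, k=3):
    """Separable box smoothing (a crude stand-in for b-spline smoothing)."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, out)

def edges_adaptive(img, block=16, factor=1.5):
    """Gradient-magnitude edges kept only where they exceed a threshold
    set per local region (the adaptive-threshold idea)."""
    gy, gx = np.gradient(smooth(img))
    mag = np.hypot(gx, gy)
    edges = np.zeros_like(mag, dtype=bool)
    for i in range(0, mag.shape[0], block):
        for j in range(0, mag.shape[1], block):
            m = mag[i:i + block, j:j + block]
            edges[i:i + block, j:j + block] = m > factor * m.mean()
    return edges

# Synthetic frame with a vertical step edge at column 32
img = np.zeros((64, 64))
img[:, 32:] = 1.0
e = edges_adaptive(img)
```

Flat regions produce no detections because their local threshold collapses to zero gradient, while the step edge is kept.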

  14. Low-Cost Large Aperture Telescopes for Optical Communications

    NASA Technical Reports Server (NTRS)

    Hemmati, Hamid

    2006-01-01

    Low-cost, 0.5-1 meter ground apertures are required for near-Earth laser communications. Low-cost ground apertures with equivalent diameters greater than 10 meters are desired for deep-space communications. This presentation focuses on identifying schemes to lower the cost of constructing networks of large apertures while continuing to meet the requirements for laser communications. The primary emphasis here is on the primary mirror. A slumped glass spherical mirror, along with passive secondary mirror corrector and active adaptive optic corrector show promise as a low-cost alternative to large diameter monolithic apertures. To verify the technical performance and cost estimate, development of a 1.5-meter telescope equipped with gimbal and dome is underway.

  15. Large Coded Aperture Mask for Spaceflight Hard X-ray Images

    NASA Technical Reports Server (NTRS)

    Vigneau, Danielle N.; Robinson, David W.

    2002-01-01

    The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random, but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel which projects a shadow on the detector array during a gamma ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied and allowed to cure to each tile. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and it was lowered to the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.

  16. Self characterization of a coded aperture array for neutron source imaging

    NASA Astrophysics Data System (ADS)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; Guler, N.; Merrill, F. E.; Wilde, C. H.

    2014-12-01

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning deuterium-tritium plasma during the stagnation stage of inertial confinement fusion implosions. Since the neutron source is small (˜100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  17. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Men Chunhua; Romeijn, H. Edwin; Jia Xun

    2010-11-15

    Purpose: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. Methods: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanic constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. Results: The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. Conclusions: The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.

  18. Ultrafast treatment plan optimization for volumetric modulated arc therapy (VMAT).

    PubMed

    Men, Chunhua; Romeijn, H Edwin; Jia, Xun; Jiang, Steve B

    2010-11-01

    To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. The authors consider a cost function consisting of two terms, the first enforcing a desired dose distribution and the second guaranteeing a smooth dose rate variation between successive gantry angles. A gantry rotation is discretized into 180 beam angles and for each beam angle, only one MLC aperture is allowed. The apertures are generated one by one in a sequential way. At each iteration of the column generation method, a deliverable MLC aperture is generated for one of the unoccupied beam angles by solving a subproblem with the consideration of MLC mechanic constraints. A subsequent master problem is then solved to determine the dose rate at all currently generated apertures by minimizing the cost function. When all 180 beam angles are occupied, the optimization completes, yielding a set of deliverable apertures and associated dose rates that produce a high quality plan. The algorithm was preliminarily tested on five prostate and five head-and-neck clinical cases, each with one full gantry rotation without any couch/collimator rotations. High quality VMAT plans have been generated for all ten cases with extremely high efficiency. It takes only 5-8 min on CPU (MATLAB code on an Intel Xeon 2.27 GHz CPU) and 18-31 s on GPU (CUDA code on an NVIDIA Tesla C1060 GPU card) to generate such plans. The authors have developed an aperture-based VMAT optimization algorithm which can generate clinically deliverable high quality treatment plans at very high efficiency.
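The column generation loop described in these two records can be sketched at toy scale. This is a hypothetical simplification: a tiny beamlet grid, a pricing rule that opens the single best contiguous interval per leaf row (a minimal MLC deliverability constraint), and a projected-gradient stand-in for the master-problem solver; the real algorithm additionally enforces dose-rate smoothness across gantry angles.

```python
import numpy as np

def best_segment(g):
    """Contiguous interval [i, j) with the most negative sum of g (Kadane's
    algorithm). An empty interval (leaf pair closed) is allowed."""
    best, cur, s, bi, bj = 0.0, 0.0, 0, 0, 0
    for j, v in enumerate(g):
        if cur >= 0.0:
            cur, s = 0.0, j
        cur += v
        if cur < best:
            best, bi, bj = cur, s, j + 1
    return bi, bj

def nnls_pg(A, b, iters=500):
    """Tiny projected-gradient solver for min ||A w - b||^2 with w >= 0
    (a stand-in for a proper non-negative least-squares routine)."""
    w = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        w = np.maximum(0.0, w - A.T @ (A @ w - b) / L)
    return w

def column_generation(D, target, shape, n_apertures=5):
    """Toy aperture-based optimization: generate MLC-deliverable apertures
    (one open interval per leaf row) from the objective gradient, then
    re-solve the master problem for non-negative aperture weights."""
    n_rows, n_cols = shape
    x = np.zeros(n_rows * n_cols)          # beamlet fluence map
    apertures = []
    for _ in range(n_apertures):
        g = (D.T @ (D @ x - target)).reshape(shape)   # gradient w.r.t. beamlets
        ap = np.zeros(shape)
        for r in range(n_rows):            # pricing subproblem, row by row
            i, j = best_segment(g[r])
            ap[r, i:j] = 1.0
        if ap.sum() == 0:                  # no improving aperture exists
            break
        apertures.append(ap.ravel())
        A = np.column_stack(apertures)
        w = nnls_pg(D @ A, target)         # master problem
        x = A @ w
    return x, apertures

# Toy check: 4x10 beamlet grid, identity dose operator, target deliverable
# by a single aperture open on columns 3..6 at weight 2
shape = (4, 10)
target = np.zeros(shape)
target[:, 3:7] = 2.0
D = np.eye(shape[0] * shape[1])
x, aps = column_generation(D, target.ravel(), shape)
```

Because the target is exactly deliverable by one aperture, the pricing step finds it on the first iteration and the loop terminates once the gradient vanishes.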

  19. Spiders in Lyot Coronagraphs

    NASA Astrophysics Data System (ADS)

    Sivaramakrishnan, Anand; Lloyd, James P.

    2005-11-01

    In principle, suppression of on-axis stellar light by a coronagraph is easier on an unobscured aperture telescope than on one with an obscured aperture. Recent designs such as the apodized pupil Lyot coronagraph, the ``band-limited'' Lyot coronagraph, and several variants of phase-mask coronagraphs work best on unobscured circular aperture telescopes. These designs were developed to enable the discovery and characterization of nearby Jovian or even terrestrial exoplanets. All of today's major space-based and adaptive optics-equipped ground-based telescopes are obscured-aperture systems with a secondary mirror held in place by secondary support ``spider'' vanes. The presence of a secondary obscuration can be dealt with by ingenious coronagraph designs, but the spider vanes themselves cause diffracted light, which can hamper the search for Jovian exoplanets around nearby stars. We look at the problem of suppressing spider vane diffraction in Lyot coronagraphs, including apodized pupil and band-limited designs. We show how spider vane diffraction can be reduced drastically and in fact contained in the final coronagraphic image, within one resolution element of the geometric image of the focal plane mask's occulting spot. This makes adaptive optics coronagraphic searches for exojupiters possible with the next generation of adaptive optics systems being developed for 8-10 m class telescopes such as Gemini and the Very Large Telescopes.

  20. TRACKING SIMULATIONS NEAR HALF-INTEGER RESONANCE AT PEP-II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nosochkov, Yuri

    2003-05-13

    Beam-beam simulations predict that PEP-II luminosity can be increased by operating the horizontal betatron tune near and above a half-integer resonance. However, effects of the resonance and its synchrotron sidebands significantly enhance betatron and chromatic perturbations which tend to reduce dynamic aperture. In the study, chromatic variation of horizontal tune near the resonance was minimized by optimizing local sextupoles in the Interaction Region. Dynamic aperture was calculated using tracking simulations in the LEGO code. Dependence of dynamic aperture on the residual orbit, dispersion and β distortion after correction was investigated.

  1. A broad band X-ray imaging spectrophotometer for astrophysical studies

    NASA Technical Reports Server (NTRS)

    Lum, Kenneth S. K.; Lee, Dong Hwan; Ku, William H.-M.

    1988-01-01

    A broadband X-ray imaging spectrophotometer (BBXRIS) has been built for astrophysical studies. The BBXRIS is based on a large-imaging gas scintillation proportional counter (LIGSPC), a combination of a gas scintillation proportional counter and a multiwire proportional counter, which achieves 8 percent (FWHM) energy resolution and 1.5-mm (FWHM) spatial resolution at 5.9 keV. The LIGSPC can be integrated with a grazing incidence mirror and a coded aperture mask to provide imaging over a broad range of X-ray energies. The results of tests involving the LIGSPC and a coded aperture mask are presented, and possible applications of the BBXRIS are discussed.

  2. Motion-based prediction is sufficient to solve the aperture problem

    PubMed Central

    Perrinet, Laurent U; Masson, Guillaume S

    2012-01-01

    In low-level sensory systems, it is still unclear how the noisy information collected locally by neurons may give rise to a coherent global percept. This is well demonstrated for the detection of motion in the aperture problem: as luminance of an elongated line is symmetrical along its axis, tangential velocity is ambiguous when measured locally. Here, we develop the hypothesis that motion-based predictive coding is sufficient to infer global motion. Our implementation is based on a context-dependent diffusion of a probabilistic representation of motion. We observe in simulations a progressive solution to the aperture problem similar to physiology and behavior. We demonstrate that this solution is the result of two underlying mechanisms. First, we demonstrate the formation of a tracking behavior favoring temporally coherent features independently of their texture. Second, we observe that incoherent features are explained away while coherent information diffuses progressively to the global scale. Most previous models included ad-hoc mechanisms such as end-stopped cells or a selection layer to track specific luminance-based features as necessary conditions to solve the aperture problem. Here, we have proved that motion-based predictive coding, as it is implemented in this functional model, is sufficient to solve the aperture problem. This solution may give insights in the role of prediction underlying a large class of sensory computations. PMID:22734489

  3. Review of particle-in-cell modeling for the extraction region of large negative hydrogen ion sources for fusion

    NASA Astrophysics Data System (ADS)

    Wünderlich, D.; Mochalskyy, S.; Montellano, I. M.; Revel, A.

    2018-05-01

    Particle-in-cell (PIC) codes have been used since the early 1960s for calculating self-consistently the motion of charged particles in plasmas, taking into account external electric and magnetic fields as well as the fields created by the particles themselves. Due to the very small time steps used (on the order of the inverse plasma frequency) and the small mesh size, the computational requirements can be very high, and they drastically increase with increasing plasma density and size of the calculation domain. Thus, usually small computational domains and/or reduced dimensionality are used. In recent years, the available central processing unit (CPU) power has increased strongly. Together with a massive parallelization of the codes, it is now possible to describe in 3D the extraction of charged particles from a plasma, using calculation domains with an edge length of several centimeters, consisting of one extraction aperture, the plasma in direct vicinity of the aperture, and a part of the extraction system. Large negative hydrogen or deuterium ion sources are essential parts of the neutral beam injection (NBI) system in future fusion devices like the international fusion experiment ITER and the demonstration reactor (DEMO). For ITER NBI, RF-driven sources with a source area of 0.9 × 1.9 m2 and 1280 extraction apertures will be used. The extraction of negative ions is accompanied by the co-extraction of electrons which are deflected onto an electron dump. Typically, the maximum extracted negative ion current is limited by the amount and the temporal instability of the co-extracted electrons, especially for operation in deuterium. Different PIC codes are available for the extraction region of large negative ion sources for fusion. Additionally, some effort is ongoing in developing codes that describe in a simplified manner (coarser mesh or reduced dimensionality) the plasma of the whole ion source.
The presentation first gives a brief overview of the current status of ion source development for ITER NBI and of the PIC method. Different PIC codes for the extraction region are introduced, as well as their coupling to codes describing the whole source (PIC codes or fluid codes). Different physical and numerical aspects of applying PIC codes to negative hydrogen ion sources for fusion are presented and discussed, along with selected code results. The main focus of future calculations will be the meniscus formation and the identification of measures for reducing the co-extracted electrons, in particular for deuterium operation. Recent results of the 3D PIC code ONIX (calculation domain: one extraction aperture and its vicinity) for the ITER prototype source (1/8 the size of the ITER NBI source) are presented.
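
    As a concrete illustration of the PIC cycle the abstract describes (deposit charge, solve fields, push particles self-consistently), here is a minimal 1D electrostatic sketch. It is not the ONIX code or any production source model: the grid size, particle count, time step, and normalized units are arbitrary demonstration choices.

```python
import numpy as np

# Minimal 1D electrostatic PIC step (illustrative only; real codes such as
# ONIX are 3D and include magnetic fields, collisions, and boundary models).
np.random.seed(0)
ng, L = 64, 1.0                         # grid cells, periodic domain length
dx = L / ng
npart = 1000
dt = 0.01                               # must resolve the plasma frequency
q, m = -1.0, 1.0                        # normalized charge and mass

x = np.random.uniform(0, L, npart)      # particle positions
v = np.random.normal(0, 0.1, npart)     # particle velocities

def deposit(x):
    """Cloud-in-cell charge deposition onto the periodic grid."""
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    rho = np.zeros(ng)
    np.add.at(rho, i, q * (1 - w) / dx)
    np.add.at(rho, (i + 1) % ng, q * w / dx)
    return rho - rho.mean()             # uniform neutralizing background

def solve_field(rho):
    """FFT Poisson solve, normalized units: dE/dx = rho."""
    k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    rho_k = np.fft.fft(rho)
    E_k = np.zeros_like(rho_k)
    E_k[1:] = rho_k[1:] / (1j * k[1:])
    return np.fft.ifft(E_k).real

def gather(E, x):
    """Interpolate the grid field back to particle positions (CIC)."""
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    return E[i] * (1 - w) + E[(i + 1) % ng] * w

# One leapfrog cycle: velocities updated against E, then positions advanced.
rho = deposit(x)
E = solve_field(rho)
v += (q / m) * gather(E, x) * dt
x = (x + v * dt) % L
```

    The very small `dt` and fine mesh are exactly what makes scaling such a scheme to 3D domains with realistic plasma densities so expensive, motivating the massive parallelization mentioned above.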

  4. Electrospray device

    NASA Technical Reports Server (NTRS)

    Demmons, Nathaniel (Inventor); Roy, Thomas (Inventor); Spence, Douglas (Inventor); Martin, Roy (Inventor); Hruby, Vladimir (Inventor); Ehrbar, Eric (Inventor); Zwahlen, Jurg (Inventor)

    2011-01-01

    An electrospray device includes an electrospray emitter adapted to receive electrospray fluid; an extractor plate spaced from the electrospray emitter and having at least one aperture; and a power supply for applying a first voltage between the extractor plate and the emitter to generate at least one Taylor cone emission through the aperture, creating an electrospray plume from the electrospray fluid. The extractor plate, as well as accelerator and shaping plates, may include a porous, conductive medium for transporting and storing excess, accumulated electrospray fluid away from the aperture.

  5. Aperture shape dependencies in extended depth of focus for imaging camera by wavefront coding

    NASA Astrophysics Data System (ADS)

    Sakita, Koichi; Ohta, Mitsuhiko; Shimano, Takeshi; Sakemoto, Akito

    2015-02-01

    Optical transfer functions (OTFs) along various directional spatial-frequency axes are investigated for a cubic phase mask (CPM) with circular and square apertures. Although the OTF has no zero points, for a circular aperture it comes very close to zero at low frequencies on the diagonal axis, which results in degradation of the restored images. The reason for this close-to-zero behaviour is analyzed in connection with point spread function profiles using the Fourier slice theorem. To avoid the close-to-zero condition, a square aperture with a CPM is indispensable in wavefront coding (WFC). We optimized the cubic coefficient α of the CPM and the coefficients of the digital filter, and succeeded in obtaining excellent de-blurred images over a large depth of field.
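
    The comparison described above can be sketched numerically: form a pupil with a cubic phase term over a square or circular aperture, compute the incoherent PSF, and take its Fourier transform to get the MTF. The cubic coefficient and grid sizes below are illustrative assumptions, not the paper's optimized values.

```python
import numpy as np

# MTF of a cubic-phase-mask pupil for square vs circular apertures.
n = 256
u = np.linspace(-1, 1, n)
X, Y = np.meshgrid(u, u)
alpha = 30.0                               # cubic phase strength (illustrative)
phase = np.exp(1j * alpha * (X**3 + Y**3))

def mtf(aperture):
    pupil = aperture * phase
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2   # incoherent PSF
    otf = np.abs(np.fft.fft2(psf))
    return otf / otf[0, 0]                  # normalize to 1 at zero frequency

mtf_square = mtf(np.ones((n, n)))
mtf_circle = mtf((X**2 + Y**2 <= 1).astype(float))

# Responses along the diagonal spatial-frequency axis, where the paper
# reports the circular-aperture CPM dipping close to zero:
diag_sq = np.array([mtf_square[k, k] for k in range(20)])
diag_ci = np.array([mtf_circle[k, k] for k in range(20)])
```

    Plotting `diag_sq` against `diag_ci` (or scanning other frequency directions) reproduces the kind of directional comparison the abstract describes.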

  6. Experimental instrumentation system for the Phased Array Mirror Extendible Large Aperture (PAMELA) test program

    NASA Technical Reports Server (NTRS)

    Boykin, William H., Jr.

    1993-01-01

    Adaptive optics are used in telescopes both for viewing objects with minimum distortion and for transmitting laser beams with minimum beam divergence and dance. In order to test concepts on a smaller scale, NASA MSFC is setting up an adaptive optics test facility with precision (fractions of a wavelength) measurement equipment. The initial system under test is the adaptive optical telescope called PAMELA (Phased Array Mirror Extendible Large Aperture). The goals of this test are: assessment of test hardware specifications for the PAMELA application and determination of the sensitivities of instruments for measuring PAMELA (and other adaptive optical telescope) imperfections; evaluation of the PAMELA system integration effort and test progress, with recommended actions to enhance these activities; and development of concepts and prototypes of experimental apparatuses for PAMELA.

  7. Compressive Coded-Aperture Multimodal Imaging Systems

    NASA Astrophysics Data System (ADS)

    Rueda-Chacon, Hoover F.

    Multimodal imaging refers to the framework of capturing images that span different physical domains such as space, spectrum, depth, time, and polarization. For instance, spectral images are modeled as 3D cubes with two spatial coordinates and one spectral coordinate. Three-dimensional cubes spanning just the space domain are referred to as depth volumes. Imaging cubes varying in time, spectrum, or depth are referred to as 4D images. Nature itself spans different physical domains, so imaging the real world demands capturing information in at least six domains simultaneously, giving rise to 3D-spatial+spectral+polarized dynamic sequences. Conventional imaging devices, however, can capture dynamic sequences with up to 3 spectral channels in real time through the use of color sensors. Capturing more spectral channels requires scanning methodologies, which demand long acquisition times. In general, multimodal imaging to date requires a sequence of different imaging sensors, placed in tandem, to simultaneously capture the different physical properties of a scene; different fusion techniques are then employed to merge all the individual information into a single image. Therefore, new ways to efficiently capture more than 3 spectral channels of 3D time-varying spatial information, with a single sensor or a few sensors, are of high interest. Compressive spectral imaging (CSI) is an imaging framework that seeks to optimally capture spectral imagery (tens of spectral channels of 2D spatial information) using fewer measurements than required by traditional sensing procedures, which follow Shannon-Nyquist sampling. Instead of capturing direct one-to-one representations of natural scenes, CSI systems acquire linear random projections of the scene and then solve an optimization algorithm to estimate the 3D spatio-spectral data cube by exploiting the theory of compressive sensing (CS). 
To date, the coding procedure in CSI has been realized through ``block-unblock" coded apertures, commonly implemented as chrome-on-quartz photomasks. These apertures either block or pass the entire spectrum of the scene at given spatial locations, thus modulating the spatial characteristics of the scene. In its first part, this thesis aims to expand the framework of CSI by replacing the traditional block-unblock coded apertures with patterned optical filter arrays, referred to as ``color" coded apertures. These apertures are formed by tiny pixelated optical filters, which allow the input image to be modulated not only spatially but also spectrally, enabling more powerful coding strategies. The proposed colored coded apertures are either synthesized through linear combinations of low-pass, high-pass, and band-pass filters, paired with binary pattern ensembles realized by a digital micromirror device (DMD), or experimentally realized through thin-film color-patterned filter arrays. The optical forward model of the proposed CSI architectures is presented along with the designs and proof-of-concept implementations, which achieve noticeable improvements in reconstruction quality compared with conventional block-unblock coded-aperture-based CSI architectures. On another front, owing to the rich information contained in the infrared spectrum as well as the depth domain, this thesis explores multimodal imaging by extending the range sensitivity of current CSI systems to a dual-band visible+near-infrared spectral domain, and it also proposes, for the first time, a new imaging device that simultaneously captures 4D data cubes (2D spatial + 1D spectral + depth) in as few as a single snapshot. Due to the snapshot advantage of this camera, video sequences are possible, thus enabling the joint capture of 5D imagery. It aims to create super-human sensing that will enable the perception of our world in new and exciting ways. 
With this, we intend to advance the state of the art in compressive sensing systems to extract depth while accurately capturing spatial and spectral material properties. The applications of such a sensor are self-evident in fields such as computer and robotic vision, because they would allow an artificial intelligence to make informed decisions about not only the locations of objects within a scene but also their material properties.
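
    The single-snapshot measurement model described above can be sketched in a few lines, contrasting a block-unblock coded aperture with a ``color" coded aperture. The one-pixel-per-band dispersion step (a CASSI-like shear) and all sizes are illustrative assumptions, not the exact architectures proposed in the thesis.

```python
import numpy as np

np.random.seed(1)
nx, ny, nl = 32, 32, 8                  # spatial size, number of spectral bands
cube = np.random.rand(nx, ny, nl)       # toy spatio-spectral scene

# Block-unblock mask codes all bands identically; a color mask codes per band.
block_mask = (np.random.rand(nx, ny) > 0.5).astype(float)
color_mask = (np.random.rand(nx, ny, nl) > 0.5).astype(float)

def snapshot(cube, mask):
    """Code each band, disperse by rolling one pixel per band, integrate."""
    coded = cube * (mask if mask.ndim == 3 else mask[:, :, None])
    sheared = np.stack([np.roll(coded[:, :, l], l, axis=1)
                        for l in range(nl)], axis=2)
    return sheared.sum(axis=2)          # detector integrates over wavelength

g_block = snapshot(cube, block_mask)    # classic block-unblock projection
g_color = snapshot(cube, color_mask)    # spectrally varying (richer) coding
```

    Reconstruction then amounts to inverting this linear map under a sparsity prior; the color mask yields a measurement operator with more spectral diversity per snapshot than the block-unblock one.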

  8. Self characterization of a coded aperture array for neutron source imaging

    DOE PAGES

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D. N.; ...

    2014-12-15

    The neutron imaging system at the National Ignition Facility (NIF) is an important diagnostic tool for measuring the two-dimensional size and shape of the neutrons produced in the burning DT plasma during the stagnation stage of ICF implosions. Since the neutron source is small (~100 μm) and neutrons are deeply penetrating (>3 cm) in all materials, the apertures used to achieve the desired 10-μm resolution are 20-cm long, triangular tapers machined in gold foils. These gold foils are stacked to form an array of 20 apertures for pinhole imaging and three apertures for penumbral imaging. These apertures must be precisely aligned to accurately place the field of view of each aperture at the design location, or the location of the field of view for each aperture must be measured. In this paper we present a new technique that has been developed for the measurement and characterization of the precise location of each aperture in the array. We present the detailed algorithms used for this characterization and the results of reconstructed sources from inertial confinement fusion implosion experiments at NIF.

  9. Increasing circular synthetic aperture sonar resolution via adapted wave atoms deconvolution.

    PubMed

    Pailhas, Yan; Petillot, Yvan; Mulgrew, Bernard

    2017-04-01

    Circular Synthetic Aperture Sonar (CSAS) processing coherently combines Synthetic Aperture Sonar (SAS) data acquired along a circular trajectory. This approach has a number of advantages; in particular, it maximises the aperture length of a SAS system, producing very high resolution sonar images. CSAS image reconstruction using back-projection algorithms, however, introduces a dissymmetry in the impulse response as the imaged point moves away from the centre of the acquisition circle. This paper proposes a sampling scheme for CSAS image reconstruction which allows every point within the full field of view of the system to be considered as the centre of a virtual CSAS acquisition. As a direct consequence of the proposed resampling scheme, the point spread function (PSF) is uniform across the full CSAS image. Closed-form solutions for the CSAS PSF are derived analytically, both in the image and the Fourier domain. This thorough knowledge of the PSF leads naturally to the proposed adapted wave atom basis for CSAS image decomposition. The wave atom deconvolution is successfully applied to simulated data, increasing the image resolution by reducing the PSF energy leakage.

  10. Can-out hatch assembly with magnetic retention means

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Frank, R.C.; Hoh, J.C.

    1985-07-03

    A can-out hatch assembly may be positioned in sealed engagement about an aperture within a chamber and is adapted to engage a cover on a container positioned over the aperture to allow the transfer of a contaminant from the chamber to the container while maintaining the contaminant as well as internal portions of the chamber and container isolated from the surrounding environment. With the container's cover engaged by the can-out hatch assembly, the hatch assembly as well as the cover may be pivotally displaced from the aperture with the cover maintaining the exterior portion of the hatch assembly isolated from the contaminant. After the contaminant is transferred from the chamber to the container, the hatch assembly and cover are again positioned in sealed engagement about the aperture. The hatch assembly then positions the cover upon the open end of the container in a sealed manner allowing the container to be removed while maintaining the chamber sealed relative to the surrounding environment. The can-out hatch assembly is particularly adapted for operation by remote control means within the sealed chamber.

  11. Can-out hatch assembly with magnetic retention means

    DOEpatents

    Frank, R.C.; Hoh, J.C.

    1985-07-03

    A can-out hatch assembly may be positioned in sealed engagement about aperture within a chamber and is adapted to engage a cover on a container positioned over the aperture to allow the transfer of a contaminant from the chamber to the container while maintaining the contaminant as well as internal portions of the chamber and container isolated from the surrounding environment. With the container's cover engaged by the can-out hatch assembly, the hatch assembly as well as the cover may be pivotally displaced from the aperture with the cover maintaining the exterior portion of the hatch assembly isolated from the contaminant. After the contaminant is transferred from the chamber to the container, the hatch assembly and cover are again positioned in sealed engagement about the aperture. The hatch assembly then positions the cover upon the open end of the container in a sealed manner allowing the container to be removed while maintaining the chamber sealed relative to the surrounding environment. The can-out hatch assembly is particularly adapted for operation by remote control means within the sealed chamber.

  12. Can-out hatch assembly with magnetic retention means

    DOEpatents

    Frank, Robert C.; Hoh, Joseph C.

    1986-01-07

    A can-out hatch assembly may be positioned in sealed engagement about an aperture within a chamber and is adapted to engage a cover on a container positioned over the aperture to allow the transfer of a contaminant from the chamber to the container while maintaining the contaminant as well as internal portions of the chamber and container isolated from the surrounding environment. With the container's cover engaged by the can-out hatch assembly, the hatch assembly as well as the cover may be pivotally displaced from the aperture with the cover maintaining the exterior portion of the hatch assembly isolated from the contaminant. After the contaminant is transferred from the chamber to the container, the hatch assembly and cover are again positioned in sealed engagement about the aperture. The hatch assembly then positions the cover upon the open end of the container in a sealed manner allowing the container to be removed while maintaining the chamber sealed relative to the surrounding environment. The can-out hatch assembly is particularly adapted for operation by remote control means within the sealed chamber.

  13. Can-out hatch assembly with magnetic retention means

    DOEpatents

    Frank, Robert C.; Hoh, Joseph C.

    1986-01-01

    A can-out hatch assembly may be positioned in sealed engagement about an aperture within a chamber and is adapted to engage a cover on a container positioned over the aperture to allow the transfer of a contaminant from the chamber to the container while maintaining the contaminant as well as internal portions of the chamber and container isolated from the surrounding environment. With the container's cover engaged by the can-out hatch assembly, the hatch assembly as well as the cover may be pivotally displaced from the aperture with the cover maintaining the exterior portion of the hatch assembly isolated from the contaminant. After the contaminant is transferred from the chamber to the container, the hatch assembly and cover are again positioned in sealed engagement about the aperture. The hatch assembly then positions the cover upon the open end of the container in a sealed manner allowing the container to be removed while maintaining the chamber sealed relative to the surrounding environment. The can-out hatch assembly is particularly adapted for operation by remote control means within the sealed chamber.

  14. Building A New Kind of Graded-Z Shield for Swift's Burst Alert Telescope

    NASA Technical Reports Server (NTRS)

    Robinson, David W.

    2002-01-01

    The Burst Alert Telescope (BAT) on Swift has a graded-Z shield that closes out the volume between the coded aperture mask and the cadmium-zinc-telluride (CZT) detector array. The purpose of the 37-kilogram shield is to attenuate gamma rays that have not penetrated the coded aperture mask of the BAT instrument and are therefore a major source of noise on the detector array. Unlike previous shields made from plates and panels, this shield consists of multiple layers of thin metal foils (lead, tantalum, tin, and copper) that are stitched together much like standard multi-layer insulation blankets. The shield sections are fastened around BAT, forming a curtain around the instrument aperture. Strength tests were performed to validate and improve the design, and the shield will be vibration tested along with BAT in late 2002. Practical aspects such as the layup design, methods of manufacture, and testing of this new kind of graded-Z shield are presented.
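
    The shield's attenuation role follows from Beer-Lambert attenuation multiplied across the stacked foil layers. The attenuation coefficients and thicknesses below are placeholder values for illustration, NOT the BAT shield's actual material data.

```python
import numpy as np

# Transmission through a stack of foils: product of exp(-mu * t) per layer.
layers = [                 # (material, linear attenuation coeff [1/cm], thickness [cm])
    ("lead",     1.2, 0.10),   # placeholder values, energy-dependent in reality
    ("tantalum", 1.0, 0.07),
    ("tin",      0.5, 0.05),
    ("copper",   0.4, 0.05),
]

def stack_transmission(layers):
    """Fraction of incident photons passing all layers."""
    return float(np.prod([np.exp(-mu * t) for _, mu, t in layers]))

T = stack_transmission(layers)   # attenuated fraction is 1 - T
```

    Grading the stack from high-Z to low-Z additionally absorbs the fluorescence photons each layer re-emits, which a single-material shield of the same mass would not do as effectively.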

  15. From Pinholes to Black Holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fenimore, Edward E.

    2014-10-06

    Pinhole photography has made major contributions to astrophysics through the use of “coded apertures”. Coded apertures were instrumental in locating gamma-ray bursts and proving that they originate in faraway galaxies, some from the birth of black holes from the first stars that formed just after the big bang.

  16. Approximated transport-of-intensity equation for coded-aperture x-ray phase-contrast imaging.

    PubMed

    Das, Mini; Liang, Zhihua

    2014-09-15

    Transport-of-intensity equations (TIEs) allow better understanding of image formation and assist in simplifying the "phase problem" associated with phase-sensitive x-ray measurements. In this Letter, we present for the first time to our knowledge a simplified form of TIE that models x-ray differential phase-contrast (DPC) imaging with coded-aperture (CA) geometry. The validity of our approximation is demonstrated through comparison with an exact TIE in numerical simulations. The relative contributions of absorption, phase, and differential phase to the acquired phase-sensitive intensity images are made readily apparent with the approximate TIE, which may prove useful for solving the inverse phase-retrieval problem associated with these CA-geometry-based DPC systems.
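
    For reference, the standard paraxial transport-of-intensity equation (Teague's form), of which the Letter derives a coded-aperture-specific approximation; the CA variant itself is not reproduced here:

```latex
% Standard transport-of-intensity equation:
%   I    : intensity in the plane transverse to propagation
%   phi  : phase of the field
%   k    : wavenumber, k = 2*pi / lambda
%   r_perp, nabla_perp : transverse coordinates and gradient
\[
  -k \,\frac{\partial I(\mathbf{r}_\perp, z)}{\partial z}
  \;=\;
  \nabla_\perp \cdot
  \bigl[\, I(\mathbf{r}_\perp, z)\, \nabla_\perp \phi(\mathbf{r}_\perp, z) \,\bigr]
\]
```

    Intuitively, the measured change of intensity along propagation encodes the transverse phase gradient, which is why intensity measurements at the coded-aperture analyzer can be related to absorption, phase, and differential phase terms.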

  17. Electromagnetic behavior of spatial terahertz wave modulators based on reconfigurable micromirror gratings in Littrow configuration.

    PubMed

    Kappa, Jan; Schmitt, Klemens M; Rahm, Marco

    2017-08-21

    Efficient, high speed spatial modulators with predictable performance are a key element in any coded aperture terahertz imaging system. For spectroscopy, the modulators must also provide a broad modulation frequency range. In this study, we numerically analyze the electromagnetic behavior of a dynamically reconfigurable spatial terahertz wave modulator based on a micromirror grating in Littrow configuration. We show that such a modulator can modulate terahertz radiation over a wide frequency range from 1.7 THz to beyond 3 THz at a modulation depth of more than 0.6. As a specific example, we numerically simulated coded aperture imaging of an object with binary transmissive properties and successfully reconstructed the image.

  18. Survey of adaptive image coding techniques

    NASA Technical Reports Server (NTRS)

    Habibi, A.

    1977-01-01

    The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
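
    One of the adaptive predictive coders in category (2) can be made concrete. Below is a toy adaptive delta modulator: a 1-bit quantizer whose step size grows on repeated bits (steep slopes) and shrinks on alternations (flat regions). The adaptation factors are illustrative choices, not values from the survey.

```python
import numpy as np

def adm_encode(signal, step0=0.1, grow=1.5, shrink=0.5):
    """Adaptive delta modulation: emit one bit per sample."""
    bits, est, step, prev = [], 0.0, step0, 0
    for s in signal:
        b = 1 if s >= est else -1
        step = max(step * (grow if b == prev else shrink), 1e-6)
        est += b * step                  # tracked estimate of the signal
        bits.append(b)
        prev = b
    return bits

def adm_decode(bits, step0=0.1, grow=1.5, shrink=0.5):
    """Mirror of the encoder: replay the same step adaptation."""
    out, est, step, prev = [], 0.0, step0, 0
    for b in bits:
        step = max(step * (grow if b == prev else shrink), 1e-6)
        est += b * step
        out.append(est)
        prev = b
    return np.array(out)

t = np.linspace(0, 1, 400)
x = np.sin(2 * np.pi * 3 * t)            # toy input signal
rec = adm_decode(adm_encode(x))          # 1 bit/sample reconstruction
```

    The encoder and decoder share identical state updates, so no side information is transmitted; the adaptation itself is what distinguishes this from plain (fixed-step) delta modulation.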

  19. Automated interferometric synthetic aperture microscopy and computational adaptive optics for improved optical coherence tomography.

    PubMed

    Xu, Yang; Liu, Yuan-Zhi; Boppart, Stephen A; Carney, P Scott

    2016-03-10

    In this paper, we introduce an algorithm framework for the automation of interferometric synthetic aperture microscopy (ISAM). Under this framework, common processing steps such as dispersion correction, Fourier domain resampling, and computational adaptive optics aberration correction are carried out as metrics-assisted parameter search problems. We further present the results of this algorithm applied to phantom and biological tissue samples and compare with manually adjusted results. With the automated algorithm, near-optimal ISAM reconstruction can be achieved without manual adjustment. At the same time, the technical barrier for the nonexpert using ISAM imaging is also significantly lowered.

  20. Visible light high-resolution imaging system for large aperture telescope by liquid crystal adaptive optics with phase diversity technique.

    PubMed

    Xu, Zihao; Yang, Chengliang; Zhang, Peiguang; Zhang, Xingyun; Cao, Zhaoliang; Mu, Quanquan; Sun, Qiang; Xuan, Li

    2017-08-30

    To date, more than eight large aperture telescopes (larger than eight meters) in the world are equipped with adaptive optics systems. Due to limitations such as the difficulty of increasing the actuator count of deformable mirrors, most of them work in the infrared waveband. A novel two-step high-resolution optical imaging approach is proposed that applies the phase diversity (PD) technique to an open-loop liquid crystal adaptive optics system (LC AOS) for visible-light high-resolution adaptive imaging. Because traditional PD is not suitable for the LC AOS, a novel PD strategy is proposed that reduces the wavefront estimation error caused by the non-modulated light generated by the liquid crystal spatial light modulator (LC SLM) and makes the residual distortions after open-loop correction smaller. Moreover, the LC SLM can introduce an arbitrary aberration, allowing free selection of the phase diversity. The estimation errors are greatly reduced in both simulations and experiments. The resolution of the reconstructed image is greatly improved, both in subjective visual effect and in the highest discernible spatial resolution. This technique can be widely used in large aperture telescopes for astronomical observations, such as of terrestrial planets and quasars, and also in other applications involving wavefront correction.

  1. Gamma-Ray Imaging Probes.

    NASA Astrophysics Data System (ADS)

    Wild, Walter James

    1988-12-01

    External nuclear medicine diagnostic imaging of early primary and metastatic lung cancer tumors is difficult due to the poor sensitivity and resolution of existing gamma cameras. Nonimaging counting detectors used for internal tumor detection give ambiguous results because distant background variations are difficult to discriminate from neighboring tumor sites. This suggests that an internal imaging nuclear medicine probe, particularly an esophageal probe, may be advantageously used to detect small tumors, because of the ability to discriminate against background variations and the capability to get close to sites neighboring the esophagus. The design, theory of operation, preliminary bench tests, characterization of noise behavior, and optimization of such an imaging probe are the central theme of this work. The central concept lies in the representation of the aperture shell by a sequence of binary digits. This, coupled with the mode of operation, which is data encoding within an axial slice of space, leads to the fundamental imaging equation in which the coding operation is conveniently described by a circulant matrix operator. The coding/decoding process is a classic coded-aperture problem, and various estimators to achieve decoding are discussed. Some estimators require a priori information about the object (or object class) being imaged; the only unbiased estimator that does not impose this requirement is the simple inverse-matrix operator. The effects of noise on the estimate (or reconstruction) are discussed for general noise models and various codes/decoding operators. The choice of an optimal aperture for detector count times of clinical relevance is examined using a statistical class-separability formalism.
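
    The circulant encode/decode model described above can be sketched in 1D: represent the aperture shell as a binary code, form the circulant coding operator, and apply the unbiased inverse-matrix decode. The length-7 quadratic-residue code and toy source below are illustrative assumptions, not the probe's actual aperture design.

```python
import numpy as np

# Binary aperture code (1 = open slot); the quadratic-residue set {1,2,4}
# mod 7 gives an invertible circulant, so the inverse-matrix decode exists.
code = np.array([0, 1, 1, 0, 1, 0, 0], dtype=float)
n = len(code)

# Circulant coding operator: column k is the code circularly shifted by k,
# so H @ obj is the circular convolution of the object with the code.
H = np.stack([np.roll(code, k) for k in range(n)], axis=1)

obj = np.array([0, 0, 5, 1, 0, 0, 2], dtype=float)  # toy 1D source distribution
data = H @ obj                                       # noiseless detector counts

est = np.linalg.solve(H, data)                       # unbiased inverse-matrix decode
```

    In the noiseless case the decode is exact; with Poisson counting noise the inverse-matrix estimator remains unbiased but its variance depends on the code, which is what drives the optimal-aperture analysis mentioned above.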

  2. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  3. X-ray microlaminography with polycapillary optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dabrowski, K. M.; Dul, D. T.; Wrobel, A.

    2013-06-03

    We demonstrate layer-by-layer x-ray microimaging using polycapillary optics. The depth resolution is achieved without sample or source rotation, in a way similar to classical tomography or laminography. The method takes advantage of the large angular apertures of polycapillary optics and of their specific microstructure, which is treated as a coded aperture. The imaging geometry is compatible with polychromatic x-ray sources and with scanning and confocal x-ray fluorescence setups.

  4. CAFNA®, coded aperture fast neutron analysis for contraband detection: Preliminary results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, L.; Lanza, R.C.

    1999-12-01

    The authors have developed a near-field coded aperture imaging system for use with fast neutron techniques as a tool for the detection of contraband and hidden explosives through nuclear elemental analysis. The technique relies on the prompt gamma rays produced by fast neutron interactions with the object being examined. The positions of the nuclear elements are determined by the locations of the gamma emitters. For existing fast neutron techniques, in Pulsed Fast Neutron Analysis (PFNA) neutrons are used with very low efficiency, and in Fast Neutron Analysis (FNA) the sensitivity for detection of the signature gamma rays is very low. For the Coded Aperture Fast Neutron Analysis (CAFNA®) the authors have developed, the efficiency of both using the probing fast neutrons and detecting the prompt gamma rays is high. For a probed volume of n³ volume elements (voxels) in a cube of n resolution elements on a side, they can compare the sensitivity with other neutron probing techniques. Compared to PFNA, the improvement in neutron utilization is n², where the total number of voxels in the object being examined is n³. Compared to FNA, the improvement in gamma-ray imaging is proportional to the total open area of the coded aperture plane; a typical value is n²/2, where n² is the number of detector resolution elements or the number of pixels in an object layer. It should be noted that the actual signal-to-noise ratio of a system also depends on the nature and distribution of background events, and this comparison may somewhat reduce the effective sensitivity of CAFNA. They have performed analysis, Monte Carlo simulations, and preliminary experiments using low- and high-energy gamma-ray sources. The results show that a high-sensitivity 3-D contraband imaging and detection system can be realized using CAFNA.
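
    The scaling comparison above can be made concrete for an example aperture size; n here is an illustrative value, not one from the experiments.

```python
# Sensitivity gains for a probed cube of n resolution elements per side
# (n**3 voxels total), per the scaling arguments in the abstract.
n = 64                          # example resolution elements per side

gain_over_pfna = n ** 2         # neutron-utilization improvement vs PFNA
gain_over_fna = n ** 2 // 2     # gamma-imaging improvement vs FNA (half-open mask)

print(gain_over_pfna, gain_over_fna)   # prints "4096 2048"
```

    The quadratic growth with n is why the coded-aperture approach pays off most for fine-grained volumetric inspection.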

  5. Experimental Hydromechanical Characterization and Numerical Modelling of a Fractured and Porous Sandstone

    NASA Astrophysics Data System (ADS)

    Souley, Mountaka; Lopez, Philippe; Boulon, Marc; Thoraval, Alain

    2015-05-01

    The experimental device previously used to study the hydromechanical behaviour of individual fractures on a laboratory scale, was adapted to make it possible to measure flow through porous rock mass samples in addition to fracture flows. A first series of tests was performed to characterize the hydromechanical behaviour of the fracture individually as well as the porous matrix (sandstone) comprising the fracture walls. A third test in this series was used to validate the experimental approach. These tests showed non-linear evolution of the contact area on the fracture walls with respect to effective normal stress. Consequently, a non-linear relationship was noted between the hydraulic aperture on the one hand, and the effective normal stress and mechanical opening on the other hand. The results of the three tests were then analysed by numerical modelling. The VIPLEF/HYDREF numerical codes used take into account the dual-porosity of the sample (fracture + rock matrix) and can be used to reproduce hydromechanical loading accurately. The analyses show that the relationship between the hydraulic aperture of the fracture and the mechanical closure has a significant effect on fracture flow rate predictions. By taking simultaneous measurements of flow in both fracture and rock matrix, we were able to carry out a global evaluation of the conceptual approach used.

  6. Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration

    NASA Astrophysics Data System (ADS)

    Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan

    2017-12-01

    As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values based on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but also accomplishes the task in far less time than a brute-force search. The approach has applications to system alignment for both Fourier processing and coded aperture design.
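
    A toy 1D version of the inference/inquiry loop described above, assuming a Gaussian beam profile and Gaussian measurement noise (both illustrative assumptions): each probe position is chosen to maximize the variance of the predicted reading, a common proxy for expected information gain under Gaussian noise.

```python
import numpy as np

np.random.seed(2)
grid = np.linspace(0, 1, 101)           # candidate beam centres / probe positions
true_c, width, noise = 0.62, 0.08, 0.05  # hidden truth and model parameters

def beam(p, c):
    """Idealized Gaussian beam intensity at probe p for centre hypothesis c."""
    return np.exp(-0.5 * ((p - c) / width) ** 2)

log_post = np.zeros_like(grid)          # uniform prior over the centre
for _ in range(15):
    post = np.exp(log_post - log_post.max())
    post /= post.sum()
    # Predicted reading at each probe under each centre hypothesis:
    pred = beam(grid[:, None], grid[None, :])       # shape [probe, hypothesis]
    mean = pred @ post
    var = (pred - mean[:, None]) ** 2 @ post
    p = grid[np.argmax(var)]                        # most informative probe
    y = beam(p, true_c) + np.random.normal(0, noise)  # simulated measurement
    # Bayes update with the Gaussian-noise log-likelihood:
    log_post += -0.5 * ((y - beam(p, grid)) / noise) ** 2

estimate = grid[np.argmax(log_post)]
```

    The real system infers centre and radius jointly in 2D by toggling DMD mirrors, but the structure is the same: predict, probe where prediction is most uncertain, update, repeat.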

  7. Reconfigurable mask for adaptive coded aperture imaging (ACAI) based on an addressable MOEMS microshutter array

    NASA Astrophysics Data System (ADS)

    McNie, Mark E.; Combes, David J.; Smith, Gilbert W.; Price, Nicola; Ridley, Kevin D.; Brunson, Kevin M.; Lewis, Keith L.; Slinger, Chris W.; Rogers, Stanley

    2007-09-01

    Coded aperture imaging has been used for astronomical applications for several years. Typical implementations use a fixed mask pattern and are designed to operate in the X-ray or gamma-ray bands. More recent applications have emerged in the visible and infrared bands for low-cost lensless imaging systems. System studies have shown that considerable advantages in image resolution may accrue from the use of multiple different images of the same scene, requiring a reconfigurable mask. We report on work to develop a novel, reconfigurable mask based on micro-opto-electro-mechanical systems (MOEMS) technology employing interference effects to modulate incident light in the mid-IR band (3-5 μm). This is achieved by tuning a large array of asymmetric Fabry-Perot cavities by applying an electrostatic force to adjust the gap between a moveable upper polysilicon mirror plate supported on suspensions and underlying fixed (electrode) layers on a silicon substrate. A key advantage of the modulator technology developed is that it is transmissive and high speed (e.g. 100 kHz), allowing simpler imaging system configurations. It is also realised using a modified standard polysilicon surface micromachining process (i.e. MUMPS-like) that is widely available and hence should have a low production cost in volume. We have developed designs capable of operating across the entire mid-IR band with peak transmissions approaching 100% and high contrast. By using a pixelated array of small mirrors, a large area device comprising individually addressable elements may be realised that allows reconfiguring of the whole mask at speeds in excess of video frame rates.
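
    The gap-tuning principle can be illustrated with the ideal symmetric Fabry-Perot (Airy) transmission formula. The actual device uses an asymmetric cavity, and the mirror reflectance and gap values below are illustrative assumptions, not measured device parameters.

```python
import numpy as np

def transmission(gap_um, wavelength_um, R=0.8):
    """Ideal lossless Fabry-Perot (Airy) transmission at normal incidence."""
    delta = 4 * np.pi * gap_um / wavelength_um      # round-trip phase
    F = 4 * R / (1 - R) ** 2                        # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(delta / 2) ** 2)

wl = 4.0                         # mid-IR wavelength (um), within the 3-5 um band
on_gap = wl / 2                  # resonant gap: transmission peaks at 1
off_gap = wl / 2 + wl / 4        # anti-resonant gap: transmission minimum

contrast = transmission(on_gap, wl) / transmission(off_gap, wl)
```

    Electrostatically moving the mirror by a quarter wavelength thus switches a pixel between open and closed states, which is the mechanism behind reconfiguring the mask element by element.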

  8. Toward Adaptive X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Atkins, Carolyn; Button, Tim W.; Cotroneo, Vincenzo; Davis, William N.; Doel, Peer; Feldman, Charlotte H.; Freeman, Mark D.; Gubarev, Mikhail V.; Kolodziejczak, Jeffrey J.; et al.

    2011-01-01

    Future x-ray observatories will require high-resolution (less than 1 arcsecond) optics with very large aperture areas (greater than 25 square meters). Even with the next generation of heavy-lift launch vehicles, launch-mass constraints and aperture-area requirements will limit the surface areal density of the grazing-incidence mirrors to about 1 kilogram per square meter or less. Achieving sub-arcsecond x-ray imaging with such lightweight mirrors will require excellent mirror surfaces, precise and stable alignment, and exceptional stiffness or deformation compensation. Attaining and maintaining alignment and figure control will likely involve adaptive (in-space adjustable) x-ray optics. In contrast with infrared and visible astronomy, adaptive optics for x-ray astronomy is in its infancy. In the middle of the past decade, two efforts began to advance technologies for adaptive x-ray telescopes: the Generation-X (Gen-X) concept studies in the United States, and the Smart X-ray Optics (SXO) Basic Technology project in the United Kingdom. This paper discusses relevant technological issues and summarizes progress toward adaptive x-ray telescopes.

  9. Molecular imaging with radionuclides, a powerful technique for studying biological processes in vivo

    NASA Astrophysics Data System (ADS)

    Cisbani, E.; Cusanno, F.; Garibaldi, F.; Magliozzi, M. L.; Majewski, S.; Torrioli, S.; Tsui, B. M. W.

    2007-02-01

    Our team is carrying out a systematic study devoted to the design of a SPECT detector with submillimeter resolution and adequate sensitivity (1 cps/kBq). Such a system will be used for functional imaging of biological processes at the molecular level in small animals. The system requirements have been defined by two relevant applications: the characterization of atherosclerotic plaques, and stem cell diffusion and homing. In order to minimize cost and implementation time, the gamma detector will be based, as much as possible, on conventional components: a scintillator crystal and position-sensitive photomultipliers read out by individual-channel electronics. A coded aperture collimator should be adopted to maximize the efficiency. The optimal selection of the detector components is investigated by systematic use of Monte Carlo simulations and laboratory validation tests. Preliminary results are presented and discussed here.

  10. Optical antenna for a visible light communications receiver

    NASA Astrophysics Data System (ADS)

    Valencia-Estrada, Juan Camilo; García-Márquez, Jorge; Topsu, Suat; Chassagne, Luc

    2018-01-01

    Visible Light Communications (VLC) receivers adapted to high transmission rates will eventually use either high-aperture lenses or non-linear optical elements capable of converting the light arriving at the receiver into an electric signal. The high-aperture lens case presents a challenge from an optical designer's point of view: the lens must collect a wide-aperture intensity flux using a limited aperture, since its use is intended for portable devices. This also limits both the lens thickness and its focal length. Here, we show a first design for a VLC receiver that takes these constraints into account. This paper describes a method to design catadioptric, monolithic lenses to be used as optical collectors of light entering from a near point light source as a spherical fan L, with a wide acceptance angle α° and high efficiency. These lenses can be mass produced, and therefore many practical applications can be found in VLC-equipped devices. We show a first design for a near light source without magnification, and a second one with a detector magnification in a meridional section. We use rigorous geometric optics, vector analysis and ordinary differential equations.

  11. A precise method for adjusting the optical system of laser sub-aperture

    NASA Astrophysics Data System (ADS)

    Song, Xing; Zhang, Xue-min; Yang, Jianfeng; Xue, Li

    2018-02-01

    In order to meet the requirements of modern astronomical observation and defense applications, the resolution of space telescopes needs to be improved. Sub-aperture stitching imaging is one technique to improve resolution; it can be used not only in ground- and space-based large optical systems, but also in laser transmission and microscopic imaging. The large-aperture primary mirror of a sub-aperture stitching imaging system is composed of multiple sub-mirrors distributed according to certain laws. All sub-mirrors are off-axis mirrors, so the alignment of a sub-aperture stitching imaging system is more complicated than that of a single off-axis optical system. An alignment method based on auto-collimation imaging and interferometric imaging is introduced in this paper. Using this method, a sub-aperture stitching imaging system composed of 12 sub-mirrors was assembled with high resolution; the beam coincidence precision is better than 0.01 mm, and the system wave aberration is better than 0.05λ.

  12. Solar energy receiver for a Stirling engine

    NASA Technical Reports Server (NTRS)

    Selcuk, M. K. (Inventor)

    1980-01-01

    A solar energy receiver includes a separable endless wall formed of a ceramic material in which a cavity of substantially cylindrical configuration is defined for entrapping solar flux. An acceptance aperture is adapted to admit to the cavity a concentrated beam of solar energy. The wall is characterized by at least one pair of contiguously related segments separated by lines of cleavage intercepting the aperture. At least one of the segments is supported for pivotal displacement. A thermal-responsive actuator is adapted to respond to excessive temperatures within the cavity by initiating pivotal displacement of one segment, whereby thermal flux is permitted to escape from the cavity.

  13. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme yields HDR images with extended depth of field. We adopt a sparse-coding algorithm to design the coded patterns, then use the sensor unit to acquire coded images under different exposure settings. Guided by the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.
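
    The exposure-fusion step of such a pipeline can be sketched in isolation. This is a hedged illustration only: the exposure times, the hat weighting, and the synthetic scene are assumptions, and the paper's sparse-coded aperture and deconvolution stages are not modeled here.

```python
import numpy as np

# Hedged sketch of multi-exposure HDR fusion: merge linearized LDR frames
# taken at different exposure times into one radiance map, down-weighting
# clipped (saturated or near-black) pixels.

def merge_hdr(ldr_frames, exposures):
    """ldr_frames: list of float arrays in [0,1] (linear); exposures: seconds."""
    num = np.zeros_like(ldr_frames[0])
    den = np.zeros_like(ldr_frames[0])
    for img, t in zip(ldr_frames, exposures):
        w = 1.0 - np.abs(2.0 * img - 1.0)       # trust mid-range pixels most
        num += w * img / t                       # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-12)

# Synthetic scene radiance spanning ~3 decades, imaged at three exposures
# with clipping at 1.0 (sensor saturation).
rng = np.random.default_rng(0)
radiance = 10.0 ** rng.uniform(-2, 1, size=(64, 64))
exposures = [0.01, 0.1, 1.0]
frames = [np.clip(radiance * t, 0.0, 1.0) for t in exposures]

hdr = merge_hdr(frames, exposures)
err = np.median(np.abs(hdr - radiance) / radiance)
print(f"median relative error of recovered radiance: {err:.2e}")
```

    In this noise-free toy case the fusion recovers the radiance map exactly wherever at least one frame is unclipped, which is the property the multiple exposure settings in the abstract are exploiting.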

  14. Enhanced retinal vasculature imaging with a rapidly configurable aperture

    PubMed Central

    Sapoznik, Kaitlyn A.; Luo, Ting; de Castro, Alberto; Sawides, Lucie; Warner, Raymond L.; Burns, Stephen A.

    2018-01-01

    In adaptive optics scanning laser ophthalmoscope (AOSLO) systems, capturing multiply scattered light can increase the contrast of the retinal microvasculature structure, cone inner segments, and retinal ganglion cells. Current systems generally use either a split detector or offset aperture approach to collect this light. We tested the ability of a spatial light modulator (SLM) as a rapidly configurable aperture to use more complex shapes to enhance the contrast of retinal structure. Particularly, we varied the orientation of a split detector aperture and explored the use of a more complex shape, the half annulus, to enhance the contrast of the retinal vasculature. We used the new approach to investigate the influence of scattering distance and orientation on vascular imaging. PMID:29541524

  15. Effects of object shape on the visual guidance of action.

    PubMed

    Eloka, Owino; Franz, Volker H

    2011-04-22

    Little is known of how visual coding of the shape of an object affects grasping movements. We addressed this issue by investigating the influence of shape perturbations on grasping. Twenty-six participants grasped a disc or a bar that were chosen such that they could in principle be grasped with identical movements (i.e., relevant sizes were identical such that the final grips consisted of identical separations of the fingers and no parts of the objects constituted obstacles for the movement). Nevertheless, participants took object shape into account and grasped the bar with a larger maximum grip aperture and a different hand angle than the disc. In 20% of the trials, the object changed its shape from bar to disc or vice versa early or late during the movement. If there was enough time (early perturbations), grasps were often adapted in flight to the new shape. These results show that the motor system takes into account even small and seemingly irrelevant changes of object shape and adapts the movement in a fine-grained manner. Although this adaptation might seem computationally expensive, we presume that its benefits (e.g., a more comfortable and more accurate movement) outweigh the costs. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. Determination of the paraxial focal length using Zernike polynomials over different apertures

    NASA Astrophysics Data System (ADS)

    Binkele, Tobias; Hilbig, David; Henning, Thomas; Fleischmann, Friedrich

    2017-02-01

    The paraxial focal length is still the most important parameter in the design of a lens. As presented at SPIE Optics + Photonics 2016, the measured focal length is a function of the aperture; the paraxial focal length is found when the aperture approaches zero. In this work, we investigate the dependency of the Zernike polynomials on the aperture size with respect to 3D space. By this, conventional wavefront measurement systems that apply Zernike polynomial fitting (e.g. a Shack-Hartmann sensor) can be used to determine the paraxial focal length, too. Since the Zernike polynomials are orthogonal over a unit circle, the aperture used in the measurement has to be normalized. By shrinking the aperture while maintaining the normalization, the Zernike coefficients change. The relation between these changes and the paraxial focal length is investigated. The dependency of the focal length on the aperture size is derived analytically and evaluated by simulation and by measurement of a strongly focusing lens. The measurements are performed using experimental ray tracing and a Shack-Hartmann sensor. With experimental ray tracing, the aperture can be chosen easily. With the Shack-Hartmann sensor, the aperture size is fixed, so the Zernike polynomials have to be adapted to different aperture sizes by the proposed method. By doing this, the paraxial focal length can be determined from the measurements in both cases.
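
    The aperture dependence can be demonstrated numerically. The sketch below is an assumption-laden stand-in for the authors' method: it models a wavefront with defocus plus primary spherical aberration, fits a small Zernike basis over shrinking apertures, converts the defocus coefficient to a focal length via f = a²/(4√3·c₄) (Noll normalization), and extrapolates to zero aperture. The focal length and aberration values are invented for illustration.

```python
import numpy as np

# Hedged sketch: the Zernike defocus coefficient fitted over a shrinking
# aperture gives an aperture-dependent focal length; the paraxial focal
# length is the extrapolation to zero aperture. Wavefront model and numbers
# are illustrative assumptions, not the authors' data.

f_true = 0.050            # paraxial focal length [m]
sph = 5.0e3               # spherical-aberration coefficient b in W = r^2/(2f) + b r^4

def defocus_coeff(a):
    """Least-squares Zernike fit (piston, defocus, primary spherical) over radius a."""
    x, y = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
    rho2 = x**2 + y**2
    rho2 = rho2[rho2 <= 1.0]
    W = (a**2 * rho2) / (2 * f_true) + sph * (a**2 * rho2)**2   # sampled wavefront
    basis = np.stack([np.ones_like(rho2),
                      np.sqrt(3) * (2 * rho2 - 1),               # Noll Z4 (defocus)
                      np.sqrt(5) * (6 * rho2**2 - 6 * rho2 + 1)  # Noll Z11 (spherical)
                      ], axis=1)
    coeffs, *_ = np.linalg.lstsq(basis, W, rcond=None)
    return coeffs[1]

apertures = np.array([0.002, 0.003, 0.004, 0.005])   # aperture radii [m]
f_est = np.array([a**2 / (4 * np.sqrt(3) * defocus_coeff(a)) for a in apertures])

# f_est(a) ~ f_true * (1 - 2*b*f_true*a^2) for small a: extrapolate vs a^2 to zero.
slope, intercept = np.polyfit(apertures**2, f_est, 1)
print(f"f at largest aperture: {f_est[-1]*1e3:.3f} mm")
print(f"extrapolated paraxial f: {intercept*1e3:.3f} mm (true {f_true*1e3:.1f} mm)")
```

    The measured focal length shrinks as the aperture grows (spherical aberration leaks into the defocus term), while the zero-aperture extrapolation recovers the paraxial value, matching the claim in the abstract.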

  17. Diffraction Analysis of Antennas With Mesh Surfaces

    NASA Technical Reports Server (NTRS)

    Rahmat-Samii, Yahya

    1987-01-01

    Strip-aperture model replaces wire-grid model. Far-field radiation pattern of antenna with mesh reflector calculated more accurately with new strip-aperture model than with wire-grid model of reflector surface. More adaptable than wire-grid model to variety of practical configurations and decidedly superior for reflectors in which mesh-cell width exceeds mesh thickness. Satisfies reciprocity theorem. Applied where mesh cells are no larger than tenth of wavelength. Small cell size permits use of simplifying approximation that reflector-surface current induced by electromagnetic field is present even in apertures. Approximation useful in calculating far field.

  18. Fabrication of the pinhole aperture for AdaptiSPECT

    PubMed Central

    Kovalsky, Stephen; Kupinski, Matthew A.; Barrett, Harrison H.; Furenlid, Lars R.

    2015-01-01

    AdaptiSPECT is a pre-clinical pinhole SPECT imaging system under final construction at the Center for Gamma-Ray Imaging. The system is designed to be able to autonomously change its imaging configuration. It comprises 16 detectors mounted on translational stages to move radially away from and towards the center of the field-of-view. The system also possesses an adaptive pinhole aperture with multiple collimator diameters and pinhole sizes, as well as the ability to switch between multiplexed and non-multiplexed imaging configurations. In this paper, we describe the fabrication of the AdaptiSPECT pinhole aperture and its controllers. PMID:26146443

  19. Regolith X-Ray Imaging Spectrometer (REXIS) Aboard the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Masterson, R. A.; Chodas, M.; Bayley, L.; Allen, B.; Hong, J.; Biswas, P.; McMenamin, C.; Stout, K.; Bokhour, E.; Bralower, H.; Carte, D.; Chen, S.; Jones, M.; Kissel, S.; Schmidt, F.; Smith, M.; Sondecker, G.; Lim, L. F.; Lauretta, D. S.; Grindlay, J. E.; Binzel, R. P.

    2018-02-01

    The Regolith X-ray Imaging Spectrometer (REXIS) is the student collaboration experiment proposed and built by an MIT-Harvard team, launched aboard NASA's OSIRIS-REx asteroid sample return mission. REXIS complements the scientific investigations of other OSIRIS-REx instruments by determining the relative abundances of key elements present on the asteroid's surface by measuring the X-ray fluorescence spectrum (stimulated by the natural solar X-ray flux) over the range of energies 0.5 to 7 keV. REXIS consists of two components: a main imaging spectrometer with a coded aperture mask and a separate solar X-ray monitor to account for the Sun's variability. In addition to element abundance ratios (relative to Si) pinpointing the asteroid's most likely meteorite association, REXIS also maps elemental abundance variability across the asteroid's surface using the asteroid's rotation as well as the spacecraft's orbital motion. Image reconstruction at the highest resolution is facilitated by the coded aperture mask. Through this operation, REXIS will be the first application of X-ray coded aperture imaging to planetary surface mapping, making this student-built instrument a pathfinder toward future planetary exploration. To date, 60 students at the undergraduate and graduate levels have been involved with the REXIS project, with the hands-on experience translating to a dozen Master's and Ph.D. theses and other student publications.
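
    The coded-aperture principle REXIS uses can be illustrated with a standard mask family. The sketch below follows the Gottesman-Fenimore MURA construction; the mask rank (a prime p ≡ 1 mod 4) is arbitrary here, and REXIS's actual mask pattern is not reproduced.

```python
import numpy as np

# Hedged sketch: a MURA (Modified Uniformly Redundant Array) coded-aperture
# mask and its matched decoding array. A point source casts a shifted copy of
# the mask on the detector; periodic correlation with the decoder recovers a
# single sharp peak (the "perfect" flat-sidelobe property).

def mura(p):
    """p must be prime with p % 4 == 1. Returns mask A (0/1) and decoder G (+/-1)."""
    residues = {(k * k) % p for k in range(1, p)}
    C = np.array([1 if i in residues else -1 for i in range(p)])
    A = np.zeros((p, p), dtype=int)
    for i in range(1, p):
        A[i, 0] = 1                       # first column open (except A[0,0])
        for j in range(1, p):
            A[i, j] = 1 if C[i] * C[j] == 1 else 0
    G = np.where(A == 1, 1, -1)
    G[0, 0] = 1                           # the single exception in the decoder
    return A, G

A, G = mura(13)

# Periodic cross-correlation of mask and decoder via FFT: ideally a delta.
corr = np.fft.ifft2(np.fft.fft2(A) * np.conj(np.fft.fft2(G))).real
peak = corr[0, 0]
sidelobe = np.abs(np.delete(corr.ravel(), 0)).max()
print(f"open fraction: {A.mean():.3f}, peak: {peak:.1f}, worst sidelobe: {sidelobe:.2e}")
```

    The decoder correlation collapses the multiplexed shadowgram back to a point, which is what "image reconstruction ... facilitated by the coded aperture mask" refers to; roughly half the mask is open, giving the throughput advantage over a single pinhole.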

  20. Aero-Optics Code Development: Experimental Databases and AVUS Code Improvements

    DTIC Science & Technology

    2009-03-01

    Fragmentary OCR excerpt; only scattered front/back-matter survives: "... direction, helped predict accurate Strouhal number."; a references entry citing Siegenthaler, J., Gordeyev, S., and Jumper, E., "Shear Layers and Aperture..."; figure-list items including a grid used for the transonic flow past a NACA0012 airfoil and a shear-layer problem (Configuration II); and acknowledgements beginning "The author would like to acknowledge Drs. Eric Jumper and ..."

  1. A novel data processing technique for image reconstruction of penumbral imaging

    NASA Astrophysics Data System (ADS)

    Xie, Hongwei; Li, Hongyun; Xu, Zeping; Song, Guzhou; Zhang, Faqiang; Zhou, Lin

    2011-06-01

    A CT image reconstruction technique was applied to the data processing of penumbral imaging. Compared with traditional processing techniques for penumbral coded pinhole images, such as Wiener, Lucy-Richardson and blind deconvolution, this approach is brand new. In this method, the coded aperture processing is, for the first time, independent of the point spread function of the image diagnostic system. In this way, the technical obstacle in traditional coded pinhole image processing caused by the uncertainty of the point spread function of the image diagnostic system is overcome. Based on the theoretical study, a simulation of penumbral imaging and image reconstruction was carried out and produced fairly good results. In the visible light experiment, a point source of light was used to irradiate a 5 mm × 5 mm object after diffuse scattering and volume scattering, and the penumbral image was formed with an aperture size of ~20 mm. Finally, the CT image reconstruction technique was used for image reconstruction and provided a fairly good reconstruction result.

  2. NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems

    NASA Technical Reports Server (NTRS)

    Quadrelli, Marco B.; Basinger, Scott; Arumugam, Darmindra; Swartzlander, Grover

    2017-01-01

    Inspired by the light scattering and focusing properties of distributed optical assemblies in Nature, such as rainbows and aerosols, and by recent laboratory successes in optical trapping and manipulation, we propose a unique combination of space optics and autonomous robotic system technology, to enable a new vision of space system architecture with applications to ultra-lightweight space optics and, ultimately, in-situ space system fabrication. Typically, the cost of an optical system is driven by the size and mass of the primary aperture. The ideal system is a cloud of spatially disordered dust-like objects that can be optically manipulated: it is highly reconfigurable, fault-tolerant, and allows very large aperture sizes at low cost. This new concept is based on recent understandings in the physics of optical manipulation of small particles in the laboratory and the engineering of distributed ensembles of spacecraft swarms to shape an orbiting cloud of micron-sized objects. In the same way that optical tweezers have revolutionized micro- and nano-manipulation of objects, our breakthrough concept will enable new large scale NASA mission applications and develop new technology in the areas of Astrophysical Imaging Systems and Remote Sensing because the cloud can operate as an adaptive optical imaging sensor. While achieving the feasibility of constructing one single aperture out of the cloud is the main topic of this work, it is clear that multiple orbiting aerosol lenses could also combine their power to synthesize a much larger aperture in space to enable challenging goals such as exo-planet detection. Furthermore, this effort could establish feasibility of key issues related to material properties, remote manipulation, and autonomy characteristics of cloud in orbit. There are several types of endeavors (science missions) that could be enabled by this type of approach, i.e. 
it can enable new astrophysical imaging systems and exo-planet searches; large apertures allow unprecedented high resolution to discern continents and important features of other planets; and it supports hyperspectral imaging, adaptive systems, spectroscopy imaging through the limb, and stable optical systems at Lagrange points. Furthermore, future micro-miniaturization may extend our dust-aperture concept to other smart dust concepts with associated capabilities. Our objective in Phase II was to experimentally and numerically investigate how to optically manipulate and maintain the shape of an orbiting cloud of dust-like matter so that it can function as an adaptable, ultra-lightweight surface. Our solution is based on the aperture being an engineered granular medium instead of a conventional monolithic aperture. This allows apertures to be built at reduced cost, enables extremely fault-tolerant apertures that could not otherwise be made, and directly enables classes of missions for exoplanet detection based on Fourier spectroscopy with tight angular resolution, as well as innovative radar systems for remote sensing. In this task, we examined the advanced feasibility of a crosscutting concept that contributes new technological approaches for space imaging systems, autonomous systems, and space applications of optical manipulation. The investigation has matured the concept that we started in Phase I to TRL 3, identifying technology gaps and candidate system architectures for the space-borne cloud as an aperture.

  3. Overlapped Fourier coding for optical aberration removal

    PubMed Central

    Horstmeyer, Roarke; Ou, Xiaoze; Chung, Jaebum; Zheng, Guoan; Yang, Changhuei

    2014-01-01

    We present an imaging procedure that simultaneously optimizes a camera’s resolution and retrieves a sample’s phase over a sequence of snapshots. The technique, termed overlapped Fourier coding (OFC), first digitally pans a small aperture across a camera’s pupil plane with a spatial light modulator. At each aperture location, a unique image is acquired. The OFC algorithm then fuses these low-resolution images into a full-resolution estimate of the complex optical field incident upon the detector. Simultaneously, the algorithm utilizes redundancies within the acquired dataset to computationally estimate and remove unknown optical aberrations and system misalignments via simulated annealing. The result is an imaging system that can computationally overcome its optical imperfections to offer enhanced resolution, at the expense of taking multiple snapshots over time. PMID:25321982
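
    The acquisition geometry of OFC can be sketched as a forward model. This is a hedged illustration of the panning step only: the object field, aperture radius, and grid of aperture positions are invented, and the paper's iterative fusion and simulated-annealing aberration recovery are not implemented here.

```python
import numpy as np

# Hedged sketch of the OFC acquisition model: a small circular aperture is
# digitally panned across the pupil (Fourier) plane, and a low-resolution
# intensity image is recorded at each position. Overlap between aperture
# positions is what lets the OFC algorithm stitch phase between snapshots.

rng = np.random.default_rng(1)
N = 128
obj = np.exp(1j * 2 * np.pi * rng.random((N, N))) * (0.5 + rng.random((N, N)))

fy, fx = np.meshgrid(np.fft.fftfreq(N), np.fft.fftfreq(N), indexing="ij")
spectrum = np.fft.fft2(obj)

def low_res_image(cx, cy, radius=0.08):
    """Intensity image seen through a circular sub-aperture centred at (cx, cy)."""
    mask = (fx - cx) ** 2 + (fy - cy) ** 2 <= radius ** 2
    return np.abs(np.fft.ifft2(spectrum * mask)) ** 2

# Overlapping aperture positions on a small grid (spacing < aperture diameter).
centers = [(cx, cy) for cx in (-0.06, 0.0, 0.06) for cy in (-0.06, 0.0, 0.06)]
stack = np.stack([low_res_image(cx, cy) for cx, cy in centers])
print(f"{len(centers)} snapshots of {stack.shape[1]}x{stack.shape[2]} pixels;"
      f" each carries a band-limited slice of the object's spectrum")
```

    Each snapshot keeps only the spectral energy inside its sub-aperture, so any single image is band-limited; recovering the full-resolution complex field from the stack is the inverse problem the OFC algorithm solves.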

  4. Generation of topologically diverse acoustic vortex beams using a compact metamaterial aperture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naify, Christina J., E-mail: christina.naify@nrl.navy.mil; Rohde, Charles A.; Martin, Theodore P.

    2016-05-30

    Here, we present a class of metamaterial-based acoustic vortex generators which are both geometrically simple and broadly tunable. The aperture overcomes the significant limitations of both active phasing systems and existing passive coded apertures. The metamaterial approach generates topologically diverse acoustic vortex waves, motivated by recent advances in leaky wave antennas, by wrapping the antenna back upon itself to produce an acoustic vortex wave antenna. We demonstrate both experimentally and analytically that this single analog structure is capable of creating multiple orthogonal orbital angular momentum modes using only a single transducer. The metamaterial design makes the aperture compact, with a diameter nearly equal to the excitation wavelength, and it can thus be easily integrated into high-density systems. Applications range from acoustic communications for high bit-rate multiplexing to biomedical devices such as microfluidic mixers.

  5. Phased Array Mirror Extendible Large Aperture (PAMELA) Optics Adjustment

    NASA Technical Reports Server (NTRS)

    1995-01-01

    Scientists at Marshall's Adaptive Optics Lab demonstrate the Wave Front Sensor alignment using the Phased Array Mirror Extendible Large Aperture (PAMELA) optics adjustment. The primary objective of the PAMELA project is to develop methods for aligning and controlling adaptive-optics segmented mirror systems. These systems can be used to acquire or project light energy. The Next Generation Space Telescope is an example of an energy acquisition system that will employ segmented mirrors. Light projection systems can also be used for power beaming and orbital debris removal. All segmented optical systems must be adjusted to provide maximum performance. PAMELA is an ongoing project that NASA is using to investigate various methods for maximizing system performance.

  6. Simultaneous displacement and slope measurement in electronic speckle pattern interferometry using adjustable aperture multiplexing.

    PubMed

    Lu, Min; Wang, Shengjia; Aulbach, Laura; Koch, Alexander W

    2016-08-01

    This paper suggests the use of adjustable aperture multiplexing (AAM), a method which is able to introduce multiple tunable carrier frequencies into a three-beam electronic speckle pattern interferometer to measure the out-of-plane displacement and its first-order derivative simultaneously. In the optical arrangement, two single apertures are located in the object and reference light paths, respectively. In cooperation with two adjustable mirrors, virtual images of the single apertures construct three pairs of virtual double apertures with variable aperture opening sizes and aperture distances. By setting the aperture parameter properly, three tunable spatial carrier frequencies are produced within the speckle pattern and completely separate the information of three interferograms in the frequency domain. By applying the inverse Fourier transform to a selected spectrum, its corresponding phase difference distribution can thus be evaluated. Therefore, we can obtain the phase map due to the deformation as well as its slope of the test surface from two speckle patterns which are recorded at different loading events. By this means, simultaneous and dynamic measurements are realized. AAM has greatly simplified the measurement system, which contributes to improving the system stability and increasing the system flexibility and adaptability to various measurement requirements. This paper presents the AAM working principle, the phase retrieval using spatial carrier frequency, and preliminary experimental results.
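
    The spatial-carrier phase retrieval described in the abstract can be sketched with the classic Fourier (Takeda-style) method on an idealized fringe pattern. The carrier frequency, window size, and test phase below are synthetic assumptions; real ESPI data would carry speckle noise that this sketch omits.

```python
import numpy as np

# Hedged sketch of carrier-frequency phase retrieval: isolate one carrier
# sideband of the interferogram in the Fourier domain, inverse-transform it,
# and take the argument to recover the phase map.

N = 256
y, x = np.mgrid[0:N, 0:N]
carrier = 2 * np.pi * 32 * x / N                       # carrier: 32 cycles across field
phase = 3.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 40.0 ** 2))
fringes = 1.0 + np.cos(carrier + phase)                # ideal speckle-free interferogram

F = np.fft.fftshift(np.fft.fft2(fringes))
# Select a window around the +carrier sideband (32 bins right of DC).
win = np.zeros((N, N))
win[N//2 - 16:N//2 + 16, N//2 + 32 - 16:N//2 + 32 + 16] = 1.0
sideband = np.fft.ifft2(np.fft.ifftshift(F * win))

recovered = np.angle(sideband) - carrier               # remove the carrier ramp
recovered = np.angle(np.exp(1j * recovered))           # wrap to (-pi, pi]
err = abs(recovered[N // 2, N // 2] - phase[N // 2, N // 2])
print(f"phase at centre: recovered {recovered[N//2, N//2]:.3f} rad,"
      f" true {phase[N//2, N//2]:.3f} rad, error {err:.3f}")
```

    Because the carrier shifts each interferogram's information to a distinct region of the frequency plane, three suitably chosen carriers can coexist in one speckle pattern and be demodulated independently, which is what enables the simultaneous displacement and slope measurement in the abstract.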

  7. Electromagnetic Field Penetration Studies

    NASA Technical Reports Server (NTRS)

    Deshpande, M.D.

    2000-01-01

    A numerical method is presented to determine the electromagnetic shielding effectiveness of a rectangular enclosure with apertures on its wall used for input and output connections, control panels, visual-access windows, ventilation panels, etc. Expressing the EM fields in terms of the cavity Green's function inside the enclosure and the free-space Green's function outside it, integral equations with the aperture tangential electric fields as unknown variables are obtained by enforcing the continuity of tangential electric and magnetic fields across the apertures. Using the Method of Moments, the integral equations are solved for the unknown aperture fields. From these aperture fields, the EM field inside a rectangular enclosure due to external electromagnetic sources is determined. Numerical results on the electric field shielding of a rectangular cavity with a thin rectangular slot obtained using the present method are compared with results obtained using a simple transmission-line technique for code validation. The present technique is applied to determine field penetration inside a Boeing 757 by approximating its passenger cabin as a rectangular cavity filled with a homogeneous medium and its passenger windows as rectangular apertures. Preliminary results for two windows, one on each side of the fuselage, were considered. Numerical results for the Boeing 757 at frequencies of 26 MHz, 171-175 MHz, and 428-432 MHz are presented.

  8. A comparative study of SAR data compression schemes

    NASA Technical Reports Server (NTRS)

    Lambert-Nebout, C.; Besson, O.; Massonnet, D.; Rogron, B.

    1994-01-01

    The amount of data collected from spaceborne remote sensing has increased substantially in recent years, while the ability to store or transmit data has not increased as quickly. There is therefore growing interest in developing compression schemes that provide both higher compression ratios and lower encoding/decoding errors. In the case of the spaceborne Synthetic Aperture Radar (SAR) earth observation system developed by the French Space Agency (CNES), the volume of data to be processed will exceed both the on-board storage capacity and the telecommunication link. The objective of this paper is twofold: to present various compression schemes adapted to SAR data, and to define a set of evaluation criteria and compare the algorithms on SAR data. We review two classical methods of SAR data compression and propose novel approaches based on Fourier transforms and spectrum coding.
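
    A minimal Fourier-transform compression scheme of the kind the paper surveys can be sketched as spectral-coefficient truncation. This is a generic illustration under stated assumptions: the test block is synthetic (a smooth scene with multiplicative speckle-like noise), and the paper's actual coding of the retained spectrum is not reproduced.

```python
import numpy as np

# Hedged sketch of Fourier-domain compression: zero all but the largest
# FFT coefficients of an image block and measure reconstruction error
# versus the fraction of coefficients kept.

rng = np.random.default_rng(2)
xx, yy = np.meshgrid(np.linspace(0, 4 * np.pi, 64), np.linspace(0, 4 * np.pi, 64))
block = (2.0 + np.sin(xx) * np.cos(yy)) * rng.exponential(1.0, (64, 64))

def compress_fft(img, keep_ratio):
    """Zero all but the largest-magnitude keep_ratio of FFT coefficients."""
    F = np.fft.fft2(img)
    k = max(1, int(keep_ratio * F.size))
    thresh = np.sort(np.abs(F).ravel())[-k]
    F_kept = np.where(np.abs(F) >= thresh, F, 0)
    return np.fft.ifft2(F_kept).real

for ratio in (0.05, 0.15, 0.40):
    rec = compress_fft(block, ratio)
    rmse = np.sqrt(np.mean((rec - block) ** 2)) / block.std()
    print(f"keep {ratio:4.0%} of coefficients -> normalized RMSE {rmse:.3f}")
```

    By Parseval's relation, the squared reconstruction error equals the energy of the discarded coefficients, so keeping more coefficients monotonically reduces the error; the rate-distortion trade-off shown here is what the paper's evaluation criteria are designed to quantify on real SAR data.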

  9. WE-G-BRF-01: Adaptation to Intrafraction Tumor Deformation During Intensity-Modulated Radiotherapy: First Proof-Of-Principle Demonstration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; OBrien, R; Shieh, C

    2014-06-15

    Purpose: Intrafraction tumor deformation limits targeting accuracy in radiotherapy and cannot be adapted to by current motion management techniques. This study simulated intrafractional treatment adaptation to tumor deformations using a dynamic multi-leaf collimator (DMLC) tracking system during intensity-modulated radiation therapy (IMRT) treatment for the first time. Methods: The DMLC tracking system was developed to adapt to intrafraction tumor deformation by warping the planned beam aperture, guided by the deformation vector field (DVF) calculated by deformable image registration (DIR) at the time of treatment delivery. Seven single-phantom deformation images with up to 10.4 mm deformation and eight tumor-system phantom deformation images with up to 21.5 mm deformation were acquired and used in the tracking simulation. The intrafraction adaptation was simulated on the DMLC tracking software platform, which was able to communicate with the image registration software, reshape the instantaneous IMRT field aperture, and log the delivered MLC fields. The deformation adaptation accuracy was evaluated by a geometric target coverage metric defined as the sum of the area incorrectly outside and inside the reference aperture. The incremental deformations were arbitrarily determined to take place equally over the delivery interval. The geometric target coverage of delivery with deformation adaptation was compared against delivery without adaptation. Results: Intrafraction deformation adaptation during dynamic IMRT plan delivery was simulated for single and system deformable phantoms. For the two delivery situations studied, deformation adaptation improved target coverage over the treatment course by 89% for single-target deformation and 79% for tumor-system deformation compared with no-tracking delivery. Conclusion: This work demonstrated the principle of real-time tumor deformation tracking using a DMLC. This is the first step towards the development of an image-guided radiotherapy system to treat deforming tumors in real-time. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship, Cure Cancer Australia Foundation, NHMRC Project Grant APP1042375 and US NIH/NCI R01CA93626.

  10. Correction of respiratory motion for IMRT using aperture adaptive technique and visual guidance: A feasibility study

    NASA Astrophysics Data System (ADS)

    Chen, Ho-Hsing; Wu, Jay; Chuang, Keh-Shih; Kuo, Hsiang-Chi

    2007-07-01

    Intensity-modulated radiation therapy (IMRT) utilizes a nonuniform beam profile to deliver precise radiation doses to a tumor while minimizing radiation exposure to surrounding normal tissues. However, intrafraction organ motion distorts the dose distribution and leads to significant dosimetric errors. In this research, we applied an aperture adaptive technique with a visual guiding system to tackle the problem of respiratory motion. A homemade computer program showing a cyclic moving pattern was projected onto the ceiling to visually help patients adjust their respiratory patterns. Once the respiratory motion becomes regular, the leaf sequence can be synchronized with the target motion. An oscillator was employed to simulate the patient's breathing pattern. Two simple fields and one IMRT field were measured to verify the accuracy. Preliminary results showed that after appropriate training, the amplitude and duration of a volunteer's breathing can be well controlled by the visual guiding system. The sharp dose gradient at the edge of the radiation fields was successfully restored. The maximum dosimetric error in the IMRT field was significantly decreased from 63% to 3%. We conclude that the aperture adaptive technique with the visual guiding system can be an inexpensive and feasible alternative that does not compromise delivery efficiency in clinical practice.

  11. Measurement of seeing and the atmospheric time constant by differential scintillations.

    PubMed

    Tokovinin, Andrei

    2002-02-20

    A simple differential analysis of stellar scintillations measured simultaneously with two apertures makes it possible to estimate seeing; moreover, some information on the vertical turbulence distribution can be obtained. A general expression for the differential scintillation index is derived for apertures of arbitrary shape and finite exposure time, and its applications are studied. Correction for exposure-time bias by use of the ratio of scintillation indices with and without time binning is examined. Bandpass-filtered scintillation in a small aperture (computed as the differential-exposure index) provides a reasonably good estimate of the atmospheric time constant for adaptive optics.
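
    A toy numerical illustration of why the differential analysis helps: normalized differencing of the two aperture signals suppresses fluctuations common to both channels while retaining the aperture-size-dependent scintillation. The noise levels are invented for illustration:

```python
import numpy as np

# Toy illustration (invented noise levels): normalized differencing of the two
# aperture signals suppresses fluctuations common to both channels, such as
# transparency variations, while retaining aperture-size-dependent
# scintillation.
def scintillation_index(I):
    """Classical index: variance of intensity over squared mean."""
    I = np.asarray(I, dtype=float)
    return np.var(I) / np.mean(I) ** 2

def differential_index(I_a, I_b):
    """Variance of the normalized intensity difference between two apertures."""
    I_a, I_b = np.asarray(I_a, float), np.asarray(I_b, float)
    return np.var(I_a / np.mean(I_a) - I_b / np.mean(I_b))

rng = np.random.default_rng(6)
common = 1.0 + 0.05 * rng.standard_normal(4096)              # shared component
small = common * (1.0 + 0.10 * rng.standard_normal(4096))    # small aperture
large = common * (1.0 + 0.02 * rng.standard_normal(4096))    # large aperture
print(differential_index(small, large) < scintillation_index(small))  # -> True
```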

  12. ION SOURCE

    DOEpatents

    Martina, E.F.

    1958-04-22

    An improved ion source particularly adapted to provide an intense beam of ions with minimum neutral-molecule egress from the source is described. The ion source structure includes means for establishing an oscillating electron discharge, including an apertured cathode at one end of the discharge. The egress of ions from the source is in a pencil-like beam. This desirable form of withdrawal of the ions from the plasma created by the discharge is achieved by shaping the field at the aperture of the cathode. A tubular insulator extends into the plasma from the aperture and, in cooperation with the electric fields at the cathode end of the discharge, focuses the ions from the source.

  13. SU-E-J-127: Implementation of An Online Replanning Tool for VMAT Using Flattening Filter-Free Beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, O; Ahunbay, E; Li, X

    2015-06-15

    Purpose: To report the implementation of an online replanning tool based on segment aperture morphing (SAM) for VMAT with flattening filter-free (FFF) beams. Methods: The previously reported SAM algorithm, modified to accommodate VMAT with FFF beams, was implemented in a tool interfaced with a treatment planning system (Monaco, Elekta). The tool allows (1) output of the beam parameters of the original VMAT plan from Monaco, and (2) input of the apertures generated by the SAM algorithm into Monaco for dose calculation on daily CT/CBCT/MRI, in the following steps: (1) quickly generating the target contour based on the image of the day, using an auto-segmentation tool (ADMIRE, Elekta) with manual editing if necessary; (2) morphing the apertures of the original VMAT plan based on the SAM to account for the interfractional change of the target from the planning to the daily images; (3) calculating the dose distribution for the new apertures with the same number of MUs as in the original plan; (4) transferring the new plan into a record & verify system (MOSAIQ, Elekta); (5) performing a software-based pre-delivery QA; (6) delivering the adaptive plan for the fraction. This workflow was implemented on 16-CPU (2.6 GHz dual-core) hardware with a GPU and was tested for sample cases of prostate, pancreas and lung tumors. Results: The online replanning process can be completed within 10 minutes. The adaptive plans generally improved plan quality compared with the IGRT repositioning plans, and the adaptive plans with FFF beams spared normal tissue better than those with FF beams. Conclusion: The online replanning tool based on SAM can quickly generate adaptive VMAT plans using FFF beams based on daily CT/CBCT/MRI, with better plan quality than IGRT repositioning plans, and can be used clinically. This research was supported by Elekta Inc. (Crawley, UK).

  14. Irradiation of the prostate and pelvic lymph nodes with an adaptive algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hwang, A. B.; Chen, J.; Nguyen, T. B.

    2012-02-15

    Purpose: The simultaneous treatment of pelvic lymph nodes and the prostate in radiotherapy for prostate cancer is complicated by the independent motion of these two target volumes. In this work, the authors study a method to adapt intensity-modulated radiation therapy (IMRT) treatment plans to compensate for this motion by adaptively morphing the multileaf collimator apertures and adjusting the segment weights. Methods: The study used CT images, tumor volumes, and normal tissue contours from patients treated in our institution. An IMRT treatment plan was created using direct aperture optimization to deliver 45 Gy to the pelvic lymph nodes and 50 Gy to the prostate and seminal vesicles. The prostate target volume was then shifted in either the anterior-posterior or the superior-inferior direction. The treatment plan was adapted by adjusting the aperture shapes with or without re-optimizing the segment weighting. The dose to the target volumes was then determined for the adapted plan. Results: Without compensation for prostate motion, 1 cm shifts of the prostate resulted in an average decrease of 14% in D95. If the isocenter is simply shifted to match the prostate motion, the prostate receives the correct dose but the pelvic lymph nodes are underdosed by 14% ± 6%. The use of adaptive morphing (with or without segment weight optimization) reduces the average change in D95 to less than 5% for both the pelvic lymph nodes and the prostate. Conclusions: Adaptive morphing with and without segment weight optimization can be used to compensate for the independent motion of the prostate and lymph nodes when combined with daily imaging or other methods to track the prostate motion. This method allows delivery of the correct dose to both the prostate and the lymph nodes with only small changes to the dose delivered to the target volumes.
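
    The D95 coverage metric used above is the dose received by at least 95% of the target volume, i.e. the 5th percentile of the in-target dose distribution; a sketch with invented dose values:

```python
import numpy as np

# D95 -- the dose received by at least 95% of the target volume -- is the 5th
# percentile of the in-target dose distribution. Dose values below are
# invented for illustration, not patient data.
def d95(dose_in_target):
    return float(np.percentile(np.asarray(dose_in_target, dtype=float), 5))

rng = np.random.default_rng(0)
covered = rng.normal(50.0, 0.5, 10_000)            # well-covered target, ~50 Gy
shifted = np.concatenate([rng.normal(50.0, 0.5, 8_600),
                          rng.normal(35.0, 2.0, 1_400)])   # ~14% cold spot
print(d95(covered) > 45.0, d95(shifted) < 45.0)    # -> True True
```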

  15. Coded-aperture X- or gamma-ray telescope with least-squares image reconstruction. III. Data acquisition and analysis enhancements

    NASA Astrophysics Data System (ADS)

    Kohman, T. P.

    1995-05-01

    The design of a cosmic X- or gamma-ray telescope with least-squares image reconstruction and its simulated operation have been described (Rev. Sci. Instrum. 60, 3396 and 3410 (1989)). Use of an auxiliary open aperture ("limiter") ahead of the coded aperture limits the object field to fewer pixels than detector elements, permitting least-squares reconstruction with improved accuracy in the imaged field; it also yields a uniformly sensitive ("flat") central field. The design has been enhanced to provide for mask-antimask operation. This cancels and eliminates uncertainties in the detector background, and the simulated results have virtually the same statistical accuracy (pixel-by-pixel output-input RMSD) as with a single mask alone. The simulations have been made more realistic by incorporating instrumental blurring of sources. A second-stage least-squares procedure has been developed to determine the precise positions and total fluxes of point sources responsible for clusters of above-background pixels in the field resulting from the first-stage reconstruction. Another program converts source positions in the image plane to celestial coordinates and vice versa, the image being a gnomonic projection of a region of the sky.
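
    The mask-antimask background cancellation combines naturally with least-squares decoding; a minimal numerical sketch with a random mask and toy dimensions, not the telescope's actual design:

```python
import numpy as np

# Minimal numerical sketch (random mask, toy dimensions): recording once
# through the mask and once through its complement ("antimask") and
# differencing cancels a uniform, unknown detector background exactly, after
# which the over-determined system is inverted by least squares.
rng = np.random.default_rng(1)
n_pix, n_det = 16, 25                 # fewer sky pixels than detector elements
A = rng.integers(0, 2, (n_det, n_pix)).astype(float)   # open(1)/closed(0)
f_true = np.zeros(n_pix); f_true[3] = 100.0; f_true[11] = 40.0  # point sources
background = 7.0                      # unknown uniform detector background

d_mask = A @ f_true + background
d_anti = (1.0 - A) @ f_true + background
# (d_mask - d_anti) = (2A - 1) @ f_true: the background has dropped out.
f_hat, *_ = np.linalg.lstsq(2.0 * A - 1.0, d_mask - d_anti, rcond=None)
print(np.allclose(f_hat, f_true))  # -> True
```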

  16. Multimode imaging device

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M

    2013-08-27

    Apparatus for detecting and locating a source of gamma rays with energies ranging from 10-20 keV to several MeV includes plural gamma-ray detectors arranged in a generally closed extended array so as to provide Compton scattering imaging and coded aperture imaging simultaneously. First detectors are arranged in a spaced manner about a surface defining the closed extended array, which may be in the form of a circle, a sphere, a square, a pentagon or a higher-order polygon. Some of the gamma rays are absorbed by the first detectors closest to the gamma source via Compton scattering, while the photons that pass unabsorbed through gaps between adjacent first detectors are incident upon second detectors disposed on the side farthest from the gamma-ray source; the spaced first detectors thus form a coded aperture array for two- or three-dimensional gamma-ray source detection.

  17. The Cadmium Zinc Telluride Imager on AstroSat

    NASA Astrophysics Data System (ADS)

    Bhalerao, V.; Bhattacharya, D.; Vibhute, A.; Pawar, P.; Rao, A. R.; Hingar, M. K.; Khanna, Rakesh; Kutty, A. P. K.; Malkar, J. P.; Patil, M. H.; Arora, Y. K.; Sinha, S.; Priya, P.; Samuel, Essy; Sreekumar, S.; Vinod, P.; Mithun, N. P. S.; Vadawale, S. V.; Vagshette, N.; Navalgund, K. H.; Sarma, K. S.; Pandiyan, R.; Seetha, S.; Subbarao, K.

    2017-06-01

    The Cadmium Zinc Telluride Imager (CZTI) is a high energy, wide-field imaging instrument on AstroSat. CZTI's namesake Cadmium Zinc Telluride detectors cover an energy range from 20 keV to >200 keV, with 11% energy resolution at 60 keV. The coded aperture mask attains an angular resolution of 17′ over a 4.6° × 4.6° (FWHM) field-of-view. CZTI functions as an open detector above 100 keV, continuously sensitive to GRBs and other transients in about 30% of the sky. The pixellated detectors are sensitive to polarization above ˜ 100 keV, with exciting possibilities for polarization studies of transients and bright persistent sources. In this paper, we provide details of the complete CZTI instrument, detectors, coded aperture mask, mechanical and electronic configuration, as well as data and products.

  18. Hybrid finite element/waveguide mode analysis of passive RF devices

    NASA Astrophysics Data System (ADS)

    McGrath, Daniel T.

    1993-07-01

    A numerical solution for time-harmonic electromagnetic fields in two-port passive radio frequency (RF) devices has been developed, implemented in a computer code, and validated. Vector finite elements are used to represent the fields in the device interior, and field continuity across waveguide apertures is enforced by matching the interior solution to a sum of waveguide modes. Consequently, the mesh may end at the aperture instead of extending into the waveguide. The report discusses the variational formulation and its reduction to a linear system using Galerkin's method. It describes the computer code, including its interface to commercial CAD software used for geometry generation. It presents validation results for waveguide discontinuities, coaxial transitions, and microstrip circuits. They demonstrate that the method is an effective and versatile tool for predicting the performance of passive RF devices.

  19. Reverse slapper detonator

    DOEpatents

    Weingart, Richard C.

    1990-01-01

    A reverse slapper detonator (70), and methodology related thereto, are provided. The detonator (70) is adapted to be driven by a pulse of electric power from an external source (80). A conductor (20) is disposed along the top (14), side (18), and bottom (16) surfaces of a sheetlike insulator (12). Part of the conductor (20) comprises a bridge (28), and an aperture (30) is positioned within the conductor (20), with the bridge (28) and the aperture (30) located on opposite sides of the insulator (12). A barrel (40) and related explosive charge (50) are positioned adjacent to and in alignment with the aperture (30), and the bridge (28) is buttressed with a backing layer (60). When the electric power pulse vaporizes the bridge (28), a portion of the insulator (12) is propelled through the aperture (30) and barrel (40), and against the explosive charge (50), thereby detonating it.

  20. Adaptive EAGLE dynamic solution adaptation and grid quality enhancement

    NASA Technical Reports Server (NTRS)

    Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.

    1992-01-01

    In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.

  1. Breakthroughs in Low-Profile Leaky-Wave HPM Antennas

    DTIC Science & Technology

    2016-03-21

    distribution is unlimited. Successful HPM tests at AFRL/RDH (2007-8). Curved Aperture Waveguide Sidewall-Emitting Antenna (CAWSEA) 2009 Arched Aperture...model*. (Others have added various correction terms and expanded on it.) • R.C. Honey (1959) used these methods with much success with his “Flush...output beam Input *See the periodic technical reports delivered under ONR Contract # N00014-13-C-0352. Adapted from: Honey, R.C.

  2. Reduced adaptability, but no fundamental disruption, of norm-based face coding following early visual deprivation from congenital cataracts.

    PubMed

    Rhodes, Gillian; Nishimura, Mayu; de Heering, Adelaide; Jeffery, Linda; Maurer, Daphne

    2017-05-01

    Faces are adaptively coded relative to visual norms that are updated by experience, and this adaptive coding is linked to face recognition ability. Here we investigated whether adaptive coding of faces is disrupted in individuals (adolescents and adults) who experience face recognition difficulties following visual deprivation from congenital cataracts in infancy. We measured adaptive coding using face identity aftereffects, where smaller aftereffects indicate less adaptive updating of face-coding mechanisms by experience. We also examined whether the aftereffects increase with adaptor identity strength, consistent with norm-based coding of identity, as in typical populations, or whether they show a different pattern indicating some more fundamental disruption of face-coding mechanisms. Cataract-reversal patients showed significantly smaller face identity aftereffects than did controls (Experiments 1 and 2). However, their aftereffects increased significantly with adaptor strength, consistent with norm-based coding (Experiment 2). Thus we found reduced adaptability but no fundamental disruption of norm-based face-coding mechanisms in cataract-reversal patients. Our results suggest that early visual experience is important for the normal development of adaptive face-coding mechanisms. © 2016 John Wiley & Sons Ltd.

  3. New approach for extraordinary transmission through an array of subwavelength apertures using thin ENNZ metamaterial liners.

    PubMed

    Baladi, Elham; Pollock, Justin G; Iyer, Ashwin K

    2015-08-10

    Extraordinary transmission (ET) through a periodic array of subwavelength apertures on a perfect metallic screen has been studied extensively in recent years, and has largely been attributed to diffraction effects, for which the periodicity of the apertures, rather than their dimensions, dominates the response. The transmission properties of the apertures at resonance, on the other hand, are not typically considered 'extraordinary' because they may be explained using more conventional aperture-theoretical mechanisms. This work describes a novel approach for achieving ET in which subwavelength apertures are made to resonate by lining them using thin, epsilon-negative and near-zero (ENNZ) metamaterials. The use of ENNZ metamaterials has recently proven successful in miniaturizing circular waveguides by strongly reducing their natural cutoff frequencies, and the theory is adapted here for the design of subwavelength apertures in a metallic screen. We present simulations and proof-of-concept measurements at microwave frequencies that demonstrate ET for apertures measuring one-quarter of a wavelength in diameter and suggest the potential for even more dramatic miniaturization simply by engineering the ENNZ metamaterial dispersion. The results exhibit a Fano-like profile whose frequency varies with the properties of the metamaterial liner, but is independent of period. It is suggested that similar behaviour can be obtained at optical frequencies, where ENNZ metamaterials may be realized using appropriately arranged chains of plasmonic nanoparticles.

  4. Active Correction of Aperture Discontinuities-Optimized Stroke Minimization. I. A New Adaptive Interaction Matrix Algorithm

    NASA Astrophysics Data System (ADS)

    Mazoyer, J.; Pueyo, L.; N'Diaye, M.; Fogarty, K.; Zimmerman, N.; Leboulleux, L.; St. Laurent, K. E.; Soummer, R.; Shaklan, S.; Norman, C.

    2018-01-01

    Future searches for biomarkers on habitable exoplanets will rely on telescope instruments that achieve extremely high contrast at small planet-to-star angular separations. Coronagraphy is a promising starlight suppression technique, providing excellent contrast and throughput for off-axis sources on clear apertures. However, the complexity of space- and ground-based telescope apertures continues to increase, owing to the combination of primary mirror segmentation, the secondary mirror, and its support structures. These discontinuities in the telescope aperture limit coronagraph performance. In this paper, we present ACAD-OSM, a novel active method to correct for the diffractive effects of aperture discontinuities in the final image plane of a coronagraph. Active methods use one or several deformable mirrors that are controlled with an interaction matrix to correct for the aberrations in the pupil; however, they are often limited by the amount of aberration introduced by aperture discontinuities. This algorithm relies on recalibration of the interaction matrix during the correction process to overcome this limitation. We first describe the ACAD-OSM technique and compare it to previous active methods for the correction of aperture discontinuities. We then show its performance in terms of contrast and off-axis throughput for static aperture discontinuities (segmentation, struts) and for some aberrations evolving over the life of the instrument (residual phase aberrations, artifacts in the aperture, misalignments in the coronagraph design). This technique can now reach the Earth-like planet detection threshold of 10^10 contrast on any given aperture over at least a 10% spectral bandwidth, with several coronagraph designs.
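
    The core idea of recalibrating the interaction matrix during correction can be illustrated with a scalar-per-mode toy system whose response curves with stroke; the quadratic model below is purely illustrative, not a coronagraph propagation model:

```python
import numpy as np

# Illustrative toy (not a coronagraph model): the system response curves with
# deformable-mirror stroke, so a Jacobian ("interaction matrix") measured once
# at the start becomes stale; re-measuring it around the current state keeps
# the Newton-style correction converging.
def field(a):
    """Toy focal-plane response: linear term plus stroke-dependent curvature."""
    return a + 0.2 * a ** 2

def jacobian(a, eps=1e-6):
    """Finite-difference interaction 'matrix' (diagonal here) about state a."""
    return (field(a + eps) - field(a - eps)) / (2.0 * eps)

a = np.zeros(3)                                # deformable-mirror state
target = np.array([1.5, -0.8, 0.6])            # desired field values
for _ in range(10):                            # correction loop
    G = jacobian(a)                            # recalibrated every iteration
    a = a + (target - field(a)) / G            # Newton-style update
print(np.allclose(field(a), target))           # -> True
```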

  5. Incoherent digital holograms acquired by interferenceless coded aperture correlation holography system without refractive lenses.

    PubMed

    Kumar, Manoj; Vijayakumar, A; Rosen, Joseph

    2017-09-14

    We present a lensless, interferenceless incoherent digital holography technique based on the principle of coded aperture correlation holography. The acquired digital hologram by this technique contains a three-dimensional image of some observed scene. Light diffracted by a point object (pinhole) is modulated using a random-like coded phase mask (CPM) and the intensity pattern is recorded and composed as a point spread hologram (PSH). A library of PSHs is created using the same CPM by moving the pinhole to all possible axial locations. Intensity diffracted through the same CPM from an object placed within the axial limits of the PSH library is recorded by a digital camera. The recorded intensity this time is composed as the object hologram. The image of the object at any axial plane is reconstructed by cross-correlating the object hologram with the corresponding component of the PSH library. The reconstruction noise attached to the image is suppressed by various methods. The reconstruction results of multiplane and thick objects by this technique are compared with regular lens-based imaging.
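
    The reconstruction-by-correlation step can be sketched in one dimension: the object hologram behaves like the point-spread hologram convolved with the object, so correlating with the stored PSH refocuses the image. A random stand-in PSH is used below, not recorded data:

```python
import numpy as np

# One-dimensional sketch with a random stand-in PSH (not recorded data): the
# object hologram is approximately the point-spread hologram convolved with
# the object, so correlating the hologram with the stored PSH (plain matched
# filtering here; phase-only filtering is often preferred to suppress noise)
# recovers the object's locations.
rng = np.random.default_rng(2)
n = 256
psh = rng.standard_normal(n)                       # point-spread hologram
obj = np.zeros(n); obj[60] = 1.0; obj[200] = 0.5   # two point "objects"
hologram = np.real(np.fft.ifft(np.fft.fft(psh) * np.fft.fft(obj)))

recon = np.real(np.fft.ifft(np.fft.fft(hologram) * np.conj(np.fft.fft(psh))))
print(int(np.argmax(recon)))  # -> 60
```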

  6. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  7. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
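
    The "code then invert" idea can be miniaturized into a runnable sketch: a sparse signal is folded through a known code into fewer measurements and recovered by a standard iterative shrinkage (ISTA) inversion. The dimensions, codes, and sparsity below are toy choices, not the paper's TEM configuration:

```python
import numpy as np

# Toy sketch of the coded-integration idea: a sparse signal is folded through
# a known code into fewer measurements, then recovered by iterative
# shrinkage-thresholding (ISTA). The +/-1 code is a zero-mean stand-in for
# open/closed masks; all sizes are toy values, not the paper's configuration.
rng = np.random.default_rng(3)
n_meas, n = 48, 64
x = np.zeros(n); x[[5, 20, 41]] = [2.0, -1.0, 1.5]          # sparse signal
A = (2.0 * rng.integers(0, 2, (n_meas, n)) - 1.0) / np.sqrt(n_meas)
y = A @ x                                                    # coded measurement

def ista(A, y, lam=0.01, iters=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    xk = np.zeros(A.shape[1])
    for _ in range(iters):
        g = xk - A.T @ (A @ xk - y) / L                         # gradient step
        xk = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return xk

x_hat = ista(A, y)
print(np.flatnonzero(np.abs(x_hat) > 0.5))  # -> [ 5 20 41]
```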

  8. Extended residence time centrifugal contactor design modification and centrifugal contactor vane plate valving apparatus for extending mixing zone residence time

    DOEpatents

    Wardle, Kent E.

    2017-06-06

    The present invention provides an annular centrifugal contactor having a housing adapted to receive a plurality of flowing liquids; a rotor on the interior of the housing; an annular mixing zone, wherein the annular mixing zone has a plurality of fluid retention reservoirs with ingress apertures near the bottom of the annular mixing zone and egress apertures located above the ingress apertures; and an adjustable vane plate stem, wherein the stem can be raised to restrict the flow of a liquid into the rotor or lowered to increase the flow of the liquid into the rotor.

  9. Circuit breaker lock out assembly

    DOEpatents

    Gordy, W.T.

    1983-05-18

    A lock out assembly for a circuit breaker which consists of a generally step-shaped unitary base with an aperture in the small portion of the step-shaped base and a roughly S-shaped retaining pin which loops through the large portion of the step-shaped base. The lock out assembly is adapted to fit over a circuit breaker with the handle switch projecting through the aperture and the retaining pin projecting into an opening of the handle switch, preventing removal.

  10. Circuit breaker lock out assembly

    DOEpatents

    Gordy, Wade T.

    1984-01-01

    A lock out assembly for a circuit breaker which consists of a generally step-shaped unitary base with an aperture in the small portion of the step-shaped base and a roughly "S" shaped retaining pin which loops through the large portion of the step-shaped base. The lock out assembly is adapted to fit over a circuit breaker with the handle switch projecting through the aperture, and the retaining pin projecting into an opening of the handle switch, preventing removal.

  11. DM/LCWFC based adaptive optics system for large aperture telescopes imaging from visible to infrared waveband.

    PubMed

    Sun, Fei; Cao, Zhaoliang; Wang, Yukun; Zhang, Caihua; Zhang, Xingyun; Liu, Yong; Mu, Quanquan; Xuan, Li

    2016-11-28

    Almost all deformable mirror (DM) based adaptive optics systems (AOSs) used on large aperture telescopes work at infrared wavebands owing to the limited number of actuators. To extend the imaging waveband to the visible, we propose a combined DM and liquid crystal wavefront corrector (DM/LCWFC) AOS. The LCWFC corrects the high-frequency aberration corresponding to the visible waveband, and the aberrations of the infrared are corrected by the DM. The calculated results show that, for a 10 m telescope, a DM/LCWFC AOS containing a 1538-actuator DM and a 404 × 404 pixel LCWFC is equivalent to a DM-based AOS with 4057 actuators. This indicates that the DM/LCWFC AOS makes it possible to work from the visible to the infrared on larger aperture telescopes. Simulations and a laboratory experiment were performed for a 2 m telescope. The experimental results show that, after correction, near diffraction-limited resolution USAF target images are obtained at the wavebands of 0.7-0.9 μm, 0.9-1.5 μm and 1.5-1.7 μm, respectively. Therefore, the DM/LCWFC AOS may be used to extend the imaging waveband of larger aperture telescopes to the visible. It is very appropriate for the observation of spatial objects and for scientific research in astronomy.

  12. Vector Adaptive/Predictive Encoding Of Speech

    NASA Technical Reports Server (NTRS)

    Chen, Juin-Hwey; Gersho, Allen

    1989-01-01

    A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s, and of reasonably good quality at 4.8 kb/s, while requiring only 3 to 4 million multiplications and additions per second. It combines advantages of adaptive/predictive coding and of code-excited linear prediction; the latter yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. The vector adaptive/predictive coding technique thus bridges the gaps in performance and complexity between adaptive/predictive coding and code-excited linear prediction.

  13. Performance evaluation of spatial compounding in the presence of aberration and adaptive imaging

    NASA Astrophysics Data System (ADS)

    Dahl, Jeremy J.; Guenther, Drake; Trahey, Gregg E.

    2003-05-01

    Spatial compounding has been used for years to reduce speckle in ultrasonic images and to resolve anatomical features hidden behind the grainy appearance of speckle. Adaptive imaging restores image contrast and resolution by compensating for beamforming errors caused by tissue-induced phase errors. Spatial compounding represents a form of incoherent imaging, whereas adaptive imaging attempts to maintain a coherent, diffraction-limited aperture in the presence of aberration. Using a Siemens Antares scanner, we acquired single channel RF data on a commercially available 1-D probe. Individual channel RF data was acquired on a cyst phantom in the presence of a near field electronic phase screen. Simulated data was also acquired for both a 1-D and a custom built 8x96, 1.75-D probe (Tetrad Corp.). The data was compounded using a receive spatial compounding algorithm; a widely used algorithm because it takes advantage of parallel beamforming to avoid reductions in frame rate. Phase correction was also performed by using a least mean squares algorithm to estimate the arrival time errors. We present simulation and experimental data comparing the performance of spatial compounding to phase correction in contrast and resolution tasks. We evaluate spatial compounding and phase correction, and combinations of the two methods, under varying aperture sizes, aperture overlaps, and aberrator strength to examine the optimum configuration and conditions in which spatial compounding will provide a similar or better result than adaptive imaging. We find that, in general, phase correction is hindered at high aberration strengths and spatial frequencies, whereas spatial compounding is helped by these aberrators.
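
    The speckle-reduction mechanism behind compounding can be illustrated with a toy fully developed speckle model: averaging N independent "looks" leaves the mean brightness unchanged but reduces the speckle standard deviation, improving speckle SNR by up to sqrt(N). This is an idealized model; real sub-aperture looks are partially correlated:

```python
import numpy as np

# Toy fully developed speckle model: incoherently averaging N independent
# looks leaves the mean unchanged but reduces the speckle standard deviation,
# improving speckle SNR (mean/std) by up to sqrt(N). Idealized statistics;
# real sub-aperture looks are partially correlated.
rng = np.random.default_rng(7)

def speckle_image(n=100_000):
    """Rayleigh-distributed envelope of circular complex Gaussian speckle."""
    return np.abs(rng.standard_normal(n) + 1j * rng.standard_normal(n))

snr = lambda img: img.mean() / img.std()      # speckle signal-to-noise ratio
single = speckle_image()
compounded = np.mean([speckle_image() for _ in range(4)], axis=0)
print(round(snr(compounded) / snr(single), 1))  # -> 2.0
```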

  14. Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope

    NASA Astrophysics Data System (ADS)

    Miyamura, Norihide

    2017-09-01

    For small satellite remote sensing missions, a large aperture telescope more than 400mm is required to realize less than 1m GSD observations. However, it is difficult or expensive to realize the large aperture telescope using a monolithic primary mirror with high surface accuracy. A segmented mirror telescope should be studied especially for small satellite missions. Generally, not only high accuracy of optical surface but also high accuracy of optical alignment is required for large aperture telescopes. For segmented mirror telescopes, the alignment is more difficult and more important. For conventional systems, the optical alignment is adjusted before launch to achieve desired imaging performance. However, it is difficult to adjust the alignment for large sized optics in high accuracy. Furthermore, thermal environment in orbit and vibration in a launch vehicle cause the misalignments of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an earth observing remote sensing sensor. An image based adaptive optics system compensates the misalignments and wavefront aberrations of optical elements using the deformable mirror by feedback of observed images. We propose the control algorithm of the deformable mirror for a segmented mirror telescope by using of observed image. The numerical simulation results and experimental results show that misalignment and wavefront aberration of the segmented mirror telescope are corrected and image quality is improved.

  15. An adaptive beamforming method for ultrasound imaging based on the mean-to-standard-deviation factor.

    PubMed

    Wang, Yuanguo; Zheng, Chichao; Peng, Hu; Chen, Qiang

    2018-06-12

    Beamforming performance has a large impact on image quality in ultrasound imaging. Previously, several adaptive weighting factors, including the coherence factor (CF) and the generalized coherence factor (GCF), have been proposed to improve image resolution and contrast. In this paper, we propose a new adaptive weighting factor for ultrasound imaging, called the signal mean-to-standard-deviation factor (SMSF). SMSF is defined as the mean-to-standard-deviation ratio of the aperture data and is used to weight the output of the delay-and-sum (DAS) beamformer before image formation. Moreover, we develop a robust SMSF (RSMSF) by extending the SMSF to the spatial frequency domain using an altered spectrum of the aperture data. In addition, a square neighborhood average is applied to the RSMSF to offer a more smoothed square-neighborhood RSMSF (SN-RSMSF) value. We compared our methods with DAS, CF, and GCF using simulated and experimental synthetic aperture data sets. The quantitative results show that SMSF results in an 82% lower full width at half-maximum (FWHM) but a 12% lower contrast ratio (CR) compared with CF. Moreover, the SN-RSMSF leads to 15% and 10% improvements, on average, in FWHM and CR compared with GCF while maintaining the speckle quality. This demonstrates that the proposed methods can effectively improve image resolution and contrast. Copyright © 2018 Elsevier B.V. All rights reserved.
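
    The weighting described in the abstract can be sketched per image point: compute the DAS sum of the delayed channel data and scale it by the mean-to-standard-deviation ratio of that same data. The absolute value and epsilon guard below are our additions for a sign-robust, division-safe toy, not part of the paper's definition:

```python
import numpy as np

# Per-image-point sketch of the SMSF weighting described in the abstract.
# The abs() and eps guard are our additions (sign robustness, no divide-by-
# zero); they are not claimed to match the paper's exact definition.
def smsf_das(aperture_data, eps=1e-12):
    """aperture_data: delayed per-channel samples for one image point."""
    s = np.asarray(aperture_data, dtype=float)
    das = s.sum()                            # conventional delay-and-sum
    smsf = abs(s.mean()) / (s.std() + eps)   # coherent data -> large factor
    return smsf * das

rng_a, rng_b = np.random.default_rng(4), np.random.default_rng(5)
coherent = np.full(64, 1.0) + 0.01 * rng_a.standard_normal(64)  # on-target
incoherent = rng_b.standard_normal(64)                          # off-target
print(abs(smsf_das(coherent)) > abs(smsf_das(incoherent)))  # -> True
```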

  16. Performance optimization of PM-16QAM transmission system enabled by real-time self-adaptive coding.

    PubMed

    Qu, Zhen; Li, Yao; Mo, Weiyang; Yang, Mingwei; Zhu, Shengxiang; Kilper, Daniel C; Djordjevic, Ivan B

    2017-10-15

    We experimentally demonstrate self-adaptive coded 5×100  Gb/s WDM polarization multiplexed 16 quadrature amplitude modulation transmission over a 100 km fiber link, which is enabled by a real-time control plane. The real-time optical signal-to-noise ratio (OSNR) is measured using an optical performance monitoring device. The OSNR measurement is processed and fed back using control plane logic and messaging to the transmitter side for code adaptation, where the binary data are adaptively encoded with three types of large-girth low-density parity-check (LDPC) codes with code rates of 0.8, 0.75, and 0.7. The total code-adaptation latency is measured to be 2273 ms. Compared with transmission without adaptation, average net capacity improvements of 102%, 36%, and 7.5% are obtained, respectively, by adaptive LDPC coding.
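
    The control-plane adaptation step amounts to picking the highest code rate the measured OSNR can support. A minimal sketch follows; the OSNR switching thresholds here are illustrative placeholders, not the experiment's actual values.

```python
def select_ldpc_rate(osnr_db,
                     rate_table=((18.0, 0.8), (15.0, 0.75), (float("-inf"), 0.7))):
    """Pick the highest-rate LDPC code whose (hypothetical) OSNR
    threshold the measured OSNR meets; the lowest rate (0.7) is the
    most robust fallback. Thresholds are assumed for illustration."""
    for threshold_db, rate in rate_table:
        if osnr_db >= threshold_db:
            return rate
```

    In the demonstrated system this decision is made by control-plane logic and messaged back to the transmitter, with the total adaptation loop measured at 2273 ms.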

  17. Design criteria for small coded aperture masks in gamma-ray astronomy

    NASA Technical Reports Server (NTRS)

    Sembay, S.; Gehrels, Neil

    1990-01-01

    Most theoretical work on coded aperture masks in X-ray and low-energy gamma-ray astronomy has concentrated on masks with large numbers of elements. For gamma-ray spectrometers in the MeV range, the detector plane usually has only a few discrete elements, so that masks with small numbers of elements are called for. For this case it is feasible to analyze by computer all the possible mask patterns of given dimension to find the ones that best satisfy the desired performance criteria. A particular set of performance criteria for comparing the flux sensitivities, source positioning accuracies and transparencies of different mask patterns is developed. The results of such a computer analysis for masks up to dimension 5 x 5 unit cell are presented and it is concluded that there is a great deal of flexibility in the choice of mask pattern for each dimension.
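
    For masks with few elements, the exhaustive search described above is tractable. The sketch below enumerates all masks of a given dimension and open fraction and ranks them by one illustrative criterion, flatness of the off-peak periodic autocorrelation (flat sidelobes give uniform decoding noise); the paper's actual criteria also cover flux sensitivity, positioning accuracy, and transparency.

```python
import itertools
import numpy as np

def best_masks(dim=3, n_open=5):
    """Exhaustively enumerate all dim x dim binary masks with n_open
    open elements and return those with the flattest off-peak periodic
    autocorrelation (lowest sidelobe standard deviation)."""
    best, best_score = [], np.inf
    for bits in itertools.product((0, 1), repeat=dim * dim):
        if sum(bits) != n_open:
            continue
        m = np.array(bits, float).reshape(dim, dim)
        F = np.fft.fft2(m)
        ac = np.real(np.fft.ifft2(F * np.conj(F)))   # periodic autocorrelation
        score = ac.ravel()[1:].std()                 # off-peak flatness
        if score < best_score - 1e-12:
            best, best_score = [m], score
        elif score <= best_score + 1e-12:
            best.append(m)
    return best, best_score
```

    A 5 x 5 search (2^25 patterns) is feasible with the same loop but needs a faster scoring path; the 3 x 3 case here runs instantly and already shows that many patterns tie on any single criterion, which is why the paper combines several.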

  18. Coded-aperture imaging of the Galactic center region at gamma-ray energies

    NASA Technical Reports Server (NTRS)

    Cook, Walter R.; Grunsfeld, John M.; Heindl, William A.; Palmer, David M.; Prince, Thomas A.

    1991-01-01

    The first coded-aperture images of the Galactic center region at energies above 30 keV have revealed two strong gamma-ray sources. One source has been identified with the X-ray source 1E 1740.7-2942, located 0.8 deg away from the nucleus. If this source is at the distance of the Galactic center, it is one of the most luminous objects in the Galaxy at energies from 35 to 200 keV. The second source is consistent in location with the X-ray source GX 354+0 (MXB 1728-34). In addition, gamma-ray flux from the location of GX 1+4 was marginally detected at a level consistent with other post-1980 measurements. No significant hard X-ray or gamma-ray flux was detected from the direction of the Galactic nucleus or from the direction of the recently discovered gamma-ray source GRS 1758-258.

  19. Implementation of Hadamard spectroscopy using MOEMS as a coded aperture

    NASA Astrophysics Data System (ADS)

    Vasile, T.; Damian, V.; Coltuc, D.; Garoi, F.; Udrea, C.

    2015-02-01

    Although modern spectrometers have reached a high level of performance, output signals are often weak, and traditional slit spectrometers still confront the problem of poor optical throughput, which limits their efficiency in low-light conditions. To overcome these issues, Hadamard spectroscopy (HS) was implemented in a conventional Ebert-Fastie spectrometer by substituting the exit slit with a digital micro-mirror device (DMD), which acts as a coded aperture. The theory behind HS and the functionality of the DMD are presented. The improvement brought by HS is demonstrated in a spectrometric experiment in which a higher-SNR spectrum is acquired. Comparative experiments were conducted to quantify the SNR difference between HS and the scanning-slit method; the results show an SNR gain of 3.35 in favor of HS. We conclude that HS is a valuable technique for low-light spectrometric experiments.
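
    The multiplex (Fellgett) advantage behind HS can be sketched with a small Monte-Carlo comparison: scanning one spectral channel per exposure versus measuring pseudo-random half-open channel combinations and inverting. The 7-element open/closed pattern below is a standard length-7 pseudonoise sequence chosen for illustration; real instruments use much larger S-matrices, and the 3.35 gain above corresponds to the actual DMD mask, not this toy.

```python
import numpy as np

def multiplex_gain(sigma=1.0, trials=2000, seed=1):
    """RMS-error ratio of scanning-slit vs multiplexed measurement of a
    7-channel spectrum under additive (detector-limited) noise."""
    row = np.array([1, 1, 1, 0, 1, 0, 0], float)        # pseudonoise mask row
    S = np.array([np.roll(row, k) for k in range(7)])   # cyclic mask shifts
    S_inv = np.linalg.inv(S)
    rng = np.random.default_rng(seed)
    x = rng.uniform(1.0, 2.0, 7)                        # true spectrum
    mse_slit = mse_mux = 0.0
    for _ in range(trials):
        y_slit = x + sigma * rng.standard_normal(7)         # one channel/exposure
        y_mux = S @ x + sigma * rng.standard_normal(7)      # multiplexed exposures
        mse_slit += np.mean((y_slit - x) ** 2)
        mse_mux += np.mean((S_inv @ y_mux - x) ** 2)
    return float(np.sqrt(mse_slit / mse_mux))
```

    For this 7-element pattern the expected RMS gain is roughly 1.5; the gain grows with mask size, which is why larger coded apertures pay off in low-light work.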

  20. Radiation and scattering from bodies of translation. Volume 2: User's manual, computer program documentation

    NASA Astrophysics Data System (ADS)

    Medgyesi-Mitschang, L. N.; Putnam, J. M.

    1980-04-01

    A hierarchy of computer programs implementing the method of moments for bodies of translation (MM/BOT) is described. The algorithm treats the far-field radiation and scattering from finite-length open cylinders of arbitrary cross section as well as the near fields and aperture-coupled fields for rectangular apertures on such bodies. The theoretical development underlying the algorithm is described in Volume 1. The structure of the computer algorithm is such that no a priori knowledge of the method of moments technique or detailed FORTRAN experience is presupposed for the user. A set of carefully drawn example problems illustrates all the options of the algorithm. For a more detailed understanding of the workings of the codes, special cross referencing to the equations in Volume 1 is provided. For additional clarity, comment statements are liberally interspersed in the code listings summarized in the present volume.

  1. Development of a multispectral autoradiography using a coded aperture

    NASA Astrophysics Data System (ADS)

    Noto, Daisuke; Takeda, Tohoru; Wu, Jin; Lwin, Thet T.; Yu, Quanwen; Zeniya, Tsutomu; Yuasa, Tetsuya; Hiranaka, Yukio; Itai, Yuji; Akatsuka, Takao

    2000-11-01

    Autoradiography is a useful imaging technique for understanding biological functions using tracers that include radioisotopes (RIs). However, it is not easy to describe the distributions of several different tracers simultaneously with conventional autoradiography using X-ray film or imaging plates. Each tracer reflects a different biological function, so if the distributions of different tracer materials could be estimated simultaneously, multispectral autoradiography would be a powerful tool for better understanding the physiological mechanisms of organs. We are therefore developing a system using a solid state detector (SSD) with high energy resolution. Here, we introduce an imaging technique with a coded aperture to acquire spatial and spectral information more efficiently. In this paper, the imaging principle is described, and its validity and fundamental properties are examined by both simulation and phantom experiments with RIs such as 201Tl, 99mTc, 67Ga, and 123I.

  2. Reduction and coding of synthetic aperture radar data with Fourier transforms

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1995-01-01

    Recently, aboard the Space Radar Laboratory (SRL), the two roles of Fourier transforms for ocean image synthesis and surface wave analysis have been implemented with a dedicated radar processor to significantly reduce Synthetic Aperture Radar (SAR) ocean data before transmission to the ground. The objective was to archive the SAR image spectrum, rather than the SAR image itself, to reduce data volume and capture the essential descriptors of the surface wave field. SAR signal data are usually sampled and coded in the time domain for transmission to the ground, where Fourier transforms are applied both to individual radar pulses and to long sequences of radar pulses to form two-dimensional images. High resolution images of the ocean often contain no striking features, and subtle image modulations by wind-generated surface waves are only apparent when large ocean regions are studied, with Fourier transforms, to reveal periodic patterns created by wind stress over the surface wave field. Major ocean currents and atmospheric instability in coastal environments are apparent as large scale modulations of SAR imagery. This paper explores the possibility of computing complex Fourier spectrum codes representing SAR images, transmitting the coded spectra to Earth for data archives, and creating scenes of surface wave signatures and air-sea interactions via inverse Fourier transformations with ground station processors.
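
    The archive-the-spectrum idea can be sketched as follows: keep only the strongest Fourier coefficients of a scene (the dominant surface-wave components) and reconstruct later by inverse FFT. The magnitude-threshold selection rule is an assumption for illustration; the SRL processor's actual coding scheme is not reproduced.

```python
import numpy as np

def compress_to_spectrum(image, keep_fraction=0.01):
    """Zero all but (roughly) the strongest keep_fraction of the 2-D
    Fourier coefficients of a scene, as a stand-in for archiving the
    image spectrum instead of the image."""
    F = np.fft.fft2(image)
    n_keep = max(1, int(keep_fraction * F.size))
    threshold = np.sort(np.abs(F).ravel())[-n_keep]
    return np.where(np.abs(F) >= threshold, F, 0.0)

def reconstruct(F_kept):
    """Recover a real-valued scene from the archived spectrum."""
    return np.real(np.fft.ifft2(F_kept))
```

    A periodic wave field concentrates its energy in a handful of spectral components, so even aggressive truncation preserves the wave signatures that matter for this application.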

  3. SYNMAG PHOTOMETRY: A FAST TOOL FOR CATALOG-LEVEL MATCHED COLORS OF EXTENDED SOURCES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bundy, Kevin; Yasuda, Naoki; Hogg, David W.

    2012-12-01

    Obtaining reliable, matched photometry for galaxies imaged by different observatories represents a key challenge in the era of wide-field surveys spanning more than several hundred square degrees. Methods such as flux fitting, profile fitting, and PSF homogenization followed by matched-aperture photometry are all computationally expensive. We present an alternative solution called 'synthetic aperture photometry' that exploits galaxy profile fits in one band to efficiently model the observed, point-spread-function-convolved light profile in other bands and predict the flux in arbitrarily sized apertures. Because aperture magnitudes are the most widely tabulated flux measurements in survey catalogs, producing synthetic aperture magnitudes (SYNMAGs) enables very fast matched photometry at the catalog level, without reprocessing imaging data. We make our code public and apply it to obtain matched photometry between Sloan Digital Sky Survey ugriz and UKIDSS YJHK imaging, recovering red-sequence colors and photometric redshifts with a scatter and accuracy as good as if not better than FWHM-homogenized photometry from the GAMA Survey. Finally, we list some specific measurements that upcoming surveys could make available to facilitate and ease the use of SYNMAGs.

  4. GPU COMPUTING FOR PARTICLE TRACKING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nishimura, Hiroshi; Song, Kai; Muriki, Krishna

    2011-03-25

    This is a feasibility study of using a modern Graphics Processing Unit (GPU) to parallelize an accelerator particle tracking code. To demonstrate the massive parallelization features provided by GPU computing, a simplified TracyGPU program is developed for dynamic aperture calculation. Performance, issues, and challenges of introducing GPUs are also discussed. General purpose computation on Graphics Processing Units (GPGPU) brings massive parallel computing capabilities to numerical calculation. However, the unique architecture of the GPU requires a comprehensive understanding of the hardware and programming model in order to optimize existing applications well. In the field of accelerator physics, the dynamic aperture calculation of a storage ring, which is often the most time consuming part of accelerator modeling and simulation, can benefit from the GPU due to its embarrassingly parallel nature, which fits well with the GPU programming model. In this paper, we use the Tesla C2050 GPU, which consists of 14 multi-processors (MPs) with 32 cores on each MP, for a total of 448 cores, to host thousands of threads dynamically. A thread is a logical execution unit of the program on the GPU. In the GPU programming model, threads are grouped into a collection of blocks. Within each block, multiple threads share the same code and up to 48 KB of shared memory. Multiple thread blocks form a grid, which is executed as a GPU kernel. A simplified code that is a subset of Tracy++ [2] is developed to demonstrate the possibility of using the GPU to speed up the dynamic aperture calculation by having each thread track a particle.

  5. 3D-printed coded apertures for x-ray backscatter radiography

    NASA Astrophysics Data System (ADS)

    Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David

    2017-09-01

    Many different mask patterns can be used for X-ray backscatter imaging using coded apertures, which can find application in the medical, industrial and security sectors. While some of these patterns may be considered to have a self-supporting structure, this is not the case for some of the most frequently used patterns such as uniformly redundant arrays or any pattern with a high open fraction. This makes mask construction difficult and usually requires a compromise in its design by drilling holes or adopting a no two holes touching version of the original pattern. In this study, this compromise was avoided by 3D printing a support structure that was then filled with a radiopaque material to create the completed mask. The coded masks were manufactured using two different methods, hot cast and cold cast. Hot casting involved casting a bismuth alloy at 80°C into the 3D printed acrylonitrile butadiene styrene mould which produced an absorber with density of 8.6 g cm-3. Cold casting was undertaken at room temperature, when a tungsten/epoxy composite was cast into a 3D printed polylactic acid mould. The cold cast procedure offered a greater density of around 9.6 to 10 g cm-3 and consequently greater X-ray attenuation. It was also found to be much easier to manufacture and more cost effective. A critical review of the manufacturing procedure is presented along with some typical images. In both cases the 3D printing process allowed square apertures to be created avoiding their approximation by circular holes when conventional drilling is used.

  6. Target-adaptive polarimetric synthetic aperture radar target discrimination using maximum average correlation height filters.

    PubMed

    Sadjadi, Firooz A; Mahalanobis, Abhijit

    2006-05-01

    We report the development of a technique for adaptive selection of polarization ellipse tilt and ellipticity angles such that the target separation from clutter is maximized. From the radar scattering matrix [S] and its complex components, in phase and quadrature phase, the elements of the Mueller matrix are obtained. Then, by means of polarization synthesis, the radar cross section of the radar scatters are obtained at different transmitting and receiving polarization states. By designing a maximum average correlation height filter, we derive a target versus clutter distance measure as a function of four transmit and receive polarization state angles. The results of applying this method on real synthetic aperture radar imagery indicate a set of four transmit and receive angles that lead to maximum target versus clutter discrimination. These optimum angles are different for different targets. Hence, by adaptive control of the state of polarization of polarimetric radar, one can noticeably improve the discrimination of targets from clutter.

  7. Advanced radiometric and interferometric millimeter-wave scene simulations

    NASA Technical Reports Server (NTRS)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  8. High-Capacity Communications from Martian Distances

    NASA Technical Reports Server (NTRS)

    Williams, W. Dan; Collins, Michael; Hodges, Richard; Orr, Richard S.; Sands, O. Scott; Schuchman, Leonard; Vyas, Hemali

    2007-01-01

    High capacity communications from Martian distances, required for the envisioned human exploration and desirable for data-intensive science missions, is challenging. NASA's Deep Space Network currently requires large antennas to close RF telemetry links operating at kilobit-per-second data rates. To accommodate higher rate communications, NASA is considering means to achieve greater effective aperture at its ground stations. This report, focusing on the return link from Mars to Earth, demonstrates that, without excessive research and development expenditure, operational Mars-to-Earth RF communications systems can achieve data rates up to 1 Gbps by 2020 using technology that today is at technology readiness level (TRL) 4-5. Advanced technology to achieve the needed increase in spacecraft power and transmit aperture is feasible at only a moderate increase in spacecraft mass and technology risk. In addition, both power-efficient, near-capacity coding and modulation and greater aperture from the DSN array will be required. In accord with these results and conclusions, investment in the following technologies is recommended: (1) lightweight (1 kg/sq m density) spacecraft antenna systems; (2) a Ka-band receive ground array consisting of relatively small (10-15 m) antennas; (3) coding and modulation technology that reduces spacecraft power by at least 3 dB; and (4) efficient generation of kilowatt-level spacecraft RF power.

  9. Adaptation of reach-to-grasp movement in response to force perturbations.

    PubMed

    Rand, M K; Shimansky, Y; Stelmach, G E; Bloedel, J R

    2004-01-01

    This study examined how reach-to-grasp movements are modified during adaptation to external force perturbations applied on the arm during reach. Specifically, we examined whether the organization of these movements was dependent upon the condition under which the perturbation was applied. In response to an auditory signal, all subjects were asked to reach for a vertical dowel, grasp it between the index finger and thumb, and lift it a short distance off the table. The subjects were instructed to do the task as fast as possible. The perturbation was an elastic load acting on the wrist at an angle of 105 deg lateral to the reaching direction. The condition was modified by changing the predictability with which the perturbation was applied in a given trial. After recording unperturbed control trials, perturbations were applied first on successive trials (predictable perturbations) and then were applied randomly (unpredictable perturbations). In the early predictable perturbation trials, reach path length became longer and reaching duration increased. As more predictable perturbations were applied, the reach path length gradually decreased and became similar to that of control trials. Reaching duration also decreased gradually as the subjects adapted by exerting force against the perturbation. In addition, the amplitude of peak grip aperture during arm transport initially increased in response to repeated perturbations. During the course of learning, it reached its maximum and thereafter slightly decreased. However, it did not return to the normal level. The subjects also adapted to the unpredictable perturbations through changes in both arm transport and grasping components, indicating that they can compensate even when the occurrence of the perturbation cannot be predicted during the inter-trial interval. 
Throughout random perturbation trials, large grip aperture values were observed, suggesting that a conservative aperture level is set regardless of whether the reaching arm is perturbed or not. In addition, the results of the predictable perturbations showed that the time from movement onset to the onset of grip aperture closure changed as adaptation occurred. However, the spatial location where the onset of finger closure occurred showed minimum changes with perturbation. These data suggest that the onset of finger closure is dependent upon distance to target rather than the temporal relationship of the grasp relative to the transport phase of the movement.

  10. Facial expression coding in children and adolescents with autism: Reduced adaptability but intact norm-based coding.

    PubMed

    Rhodes, Gillian; Burton, Nichola; Jeffery, Linda; Read, Ainsley; Taylor, Libby; Ewing, Louise

    2018-05-01

    Individuals with autism spectrum disorder (ASD) can have difficulty recognizing emotional expressions. Here, we asked whether the underlying perceptual coding of expression is disrupted. Typical individuals code expression relative to a perceptual (average) norm that is continuously updated by experience. This adaptability of face-coding mechanisms has been linked to performance on various face tasks. We used an adaptation aftereffect paradigm to characterize expression coding in children and adolescents with autism. We asked whether face expression coding is less adaptable in autism and whether there is any fundamental disruption of norm-based coding. If expression coding is norm-based, then the face aftereffects should increase with adaptor expression strength (distance from the average expression). We observed this pattern in both autistic and typically developing participants, suggesting that norm-based coding is fundamentally intact in autism. Critically, however, expression aftereffects were reduced in the autism group, indicating that expression-coding mechanisms are less readily tuned by experience. Reduced adaptability has also been reported for coding of face identity and gaze direction. Thus, there appears to be a pervasive lack of adaptability in face-coding mechanisms in autism, which could contribute to face processing and broader social difficulties in the disorder. © 2017 The British Psychological Society.

  11. High-contrast imager for Complex Aperture Telescopes (HiCAT). 4. Status and wavefront control development

    NASA Astrophysics Data System (ADS)

    Leboulleux, Lucie; N'Diaye, Mamadou; Riggs, A. J. E.; Egron, Sylvain; Mazoyer, Johan; Pueyo, Laurent; Choquet, Elodie; Perrin, Marshall D.; Kasdin, Jeremy; Sauvage, Jean-François; Fusco, Thierry; Soummer, Rémi

    2016-07-01

    Segmented telescopes are a possible approach to enable large-aperture space telescopes for the direct imaging and spectroscopy of habitable worlds. However, the increased complexity of their aperture geometry, due to their central obstruction, support structures and segment gaps, makes high-contrast imaging very challenging. The High-contrast imager for Complex Aperture Telescopes (HiCAT) was designed to study and develop solutions for such telescope pupils using wavefront control and starlight suppression. The testbed design has the flexibility to enable studies with increasing complexity for telescope aperture geometries: starting with off-axis telescopes, then on-axis telescopes with central obstruction and support structures (e.g. the Wide Field Infrared Survey Telescope [WFIRST]), up to on-axis segmented telescopes, e.g. including various concepts for a Large UV, Optical, IR telescope (LUVOIR), such as the High Definition Space Telescope (HDST). We completed optical alignment in the summer of 2014, and a first deformable mirror was successfully integrated in the testbed, with a total wavefront error of 13 nm RMS over an 18 mm diameter circular pupil in open loop. HiCAT will also be provided with a segmented mirror conjugated with a shaped pupil representing the HDST configuration, to directly study wavefront control in the presence of segment gaps, central obstruction and spider. We recently applied a focal plane wavefront control method combined with a classical Lyot coronagraph on HiCAT, and we found limitations on contrast performance due to vibration effects. In this communication, we analyze this instability and study its impact on the performance of wavefront control algorithms. We present our Speckle Nulling code to control and correct for wavefront errors both in simulation and on the testbed. This routine is first tested in simulation mode without instability to validate our code. 
We then add simulated vibrations to study the degradation of contrast performance in the presence of these effects.

  12. Reduced adaptability, but no fundamental disruption, of norm-based face-coding mechanisms in cognitively able children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Ewing, Louise; Jeffery, Linda; Avard, Eleni; Taylor, Libby

    2014-09-01

    Faces are adaptively coded relative to visual norms that are updated by experience. This coding is compromised in autism and the broader autism phenotype, suggesting that atypical adaptive coding of faces may be an endophenotype for autism. Here we investigate the nature of this atypicality, asking whether adaptive face-coding mechanisms are fundamentally altered, or simply less responsive to experience, in autism. We measured adaptive coding, using face identity aftereffects, in cognitively able children and adolescents with autism and neurotypical age- and ability-matched participants. We asked whether these aftereffects increase with adaptor identity strength as in neurotypical populations, or whether they show a different pattern indicating a more fundamental alteration in face-coding mechanisms. As expected, face identity aftereffects were reduced in the autism group, but they nevertheless increased with adaptor strength, like those of our neurotypical participants, consistent with norm-based coding of face identity. Moreover, their aftereffects correlated positively with face recognition ability, consistent with an intact functional role for adaptive coding in face recognition ability. We conclude that adaptive norm-based face-coding mechanisms are basically intact in autism, but are less readily calibrated by experience. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. LLE review. Quarterly report, January 1994--March 1994, Volume 58

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simon, A.

    1994-07-01

    This volume of the LLE Review, covering the period Jan - Mar 1994, contains articles on backlighting diagnostics; the effect of electron collisions on ion-acoustic waves and heat flow; using PIC code simulations for analysis of ultrashort laser pulses interacting with solid targets; creating a new instrument for characterizing thick cryogenic layers; and a description of a large-aperture ring amplifier for laser-fusion drivers. Three of these articles - backlighting diagnostics; characterizing thick cryogenic layers; and large-aperture ring amplifier - are directly related to the OMEGA Upgrade, now under construction. Separate abstracts have been prepared for articles from this report.

  14. Optimum angle-cut of collimator for dense objects in high-energy proton radiography

    NASA Astrophysics Data System (ADS)

    Xu, Hai-Bo; Zheng, Na

    2016-02-01

    The use of minus identity lenses with an angle-cut collimator can achieve high contrast images in high-energy proton radiography. This article presents the principles of choosing the angle-cut aperture of the collimator for different energies and objects. Numerical simulation using the Monte Carlo code Geant4 has been implemented to investigate the entire radiography for the French test object. The optimum angle-cut apertures of the collimators are also obtained for different energies. Supported by NSAF (11176001) and Science and Technology Developing Foundation of China Academy of Engineering Physics (2012A0202006)

  15. Relay telescope for high power laser alignment system

    DOEpatents

    Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.

    2006-09-19

    A laser system includes an optical path having an intracavity relay telescope with a telescope focal point for imaging an output of the gain medium between an image location at or near the gain medium and an image location at or near an output coupler for the laser system. A kinematic mount is provided within a vacuum chamber and adapted to secure beam baffles near the telescope focal point. An access port on the vacuum chamber is adapted for allowing insertion and removal of the beam baffles. A first baffle formed using an alignment pinhole aperture is used during alignment of the laser system. A second tapered baffle replaces the alignment aperture during operation and acts as a far-field baffle in which off-angle beams strike the baffle at a grazing angle of incidence, reducing fluence levels at the impact areas.

  16. Adaptive coding of MSS imagery. [Multi Spectral band Scanners

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Samulon, A. S.; Fultz, G. L.; Lumb, D.

    1977-01-01

    A number of adaptive data compression techniques are considered for reducing the bandwidth of multispectral data. They include adaptive transform coding, adaptive DPCM, adaptive cluster coding, and a hybrid method. The techniques are simulated and their performance in compressing the bandwidth of Landsat multispectral images is evaluated and compared using signal-to-noise ratio and classification consistency as fidelity criteria.

  17. Optimization of sparse synthetic transmit aperture imaging with coded excitation and frequency division.

    PubMed

    Behar, Vera; Adam, Dan

    2005-12-01

An effective aperture approach is used for optimization of a sparse synthetic transmit aperture (STA) imaging system with coded excitation and frequency division. A new two-stage algorithm is proposed for optimization of both the positions of the transmit elements and the weights of the receive elements. In order to increase the signal-to-noise ratio in a synthetic aperture system, temporal encoding of the excitation signals is employed. When comparing excitation by linear frequency modulation (LFM) signals and phase shift key modulation (PSKM) signals, the analysis shows that chirps are better for excitation, since the sidelobes generated at the output of a compression filter are much smaller than those produced by the binary PSKM signals. Here, an implementation of fast STA imaging is studied using spatial encoding with frequency division of the LFM signals. The proposed system employs a 64-element array with only four active elements used during transmit. The two-dimensional point spread function (PSF) produced by such a sparse STA system is compared to the PSF produced by an equivalent phased array system, using the Field II simulation program. The analysis demonstrates the superiority of the new sparse STA imaging system using coded excitation and frequency division. Compared to a conventional phased array imaging system, this system acquires images of equivalent quality 60 times faster when the transmit elements are fired in pairs consecutively and the power level used during transmit is very low. The fastest acquisition time is achieved when all transmit elements are fired simultaneously, which improves detectability, but at the cost of a slight degradation of the axial resolution. In real-time implementation, however, it must be borne in mind that the frame rate of an STA imaging system depends not only on the acquisition time of the data but also on the processing time needed for image reconstruction. Compared to phased array imaging, a significant increase in the frame rate of an STA imaging system is possible only if an equally time-efficient algorithm is used for image reconstruction.
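The sidelobe behavior of chirp (LFM) excitation after pulse compression can be illustrated with a minimal matched-filter sketch. This is a simplified, hypothetical example (not the paper's Field II setup); the pulse length and fractional bandwidth are illustrative values:

```python
import numpy as np

def lfm_chirp(n_samples, bandwidth_frac=0.5):
    """Baseband linear-FM (chirp) pulse; bandwidth_frac is the swept
    bandwidth as a fraction of the sampling rate."""
    t = np.linspace(-0.5, 0.5, n_samples, endpoint=False)
    k = bandwidth_frac * n_samples  # frequency sweep in cycles per unit time
    return np.exp(1j * np.pi * k * t**2)

def compress(pulse):
    """Matched-filter (pulse-compression) output via autocorrelation."""
    return np.abs(np.correlate(pulse, pulse, mode="full"))

pulse = lfm_chirp(256)
out = compress(pulse)
# The compressed peak equals the pulse energy (256 here, since |pulse| = 1);
# range sidelobes away from the main lobe sit far below the peak.
```

For an unweighted chirp the first sidelobe is near -13 dB; amplitude weighting of the matched filter would push it lower still, which is the property that favors chirps over binary phase codes in the abstract above.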

  18. Imaging of spatially extended hot spots with coded apertures for intra-operative nuclear medicine applications

    NASA Astrophysics Data System (ADS)

    Kaissas, I.; Papadimitropoulos, C.; Potiriadis, C.; Karafasoulis, K.; Loukas, D.; Lambropoulos, C. P.

    2017-01-01

Coded aperture imaging transcends planar imaging with conventional collimators in efficiency and Field of View (FOV). We present experimental results for the detection of 141 keV and 122 keV γ-photons emitted by uniformly extended 99mTc and 57Co hot-spots along with simulations of uniformly and normally extended 99mTc hot-spots. These results prove that the method can be used for intra-operative imaging of radio-traced sentinel nodes and thyroid remnants. The study is performed using a setup of two gamma cameras, each consisting of a coded-aperture (or mask) of Modified Uniformly Redundant Array (MURA) of rank 19 positioned on top of a CdTe detector. The detector pixel pitch is 350 μm and its active area is 4.4 × 4.4 cm2, while the mask element size is 1.7 mm. The detectable photon energy ranges from 15 keV up to 200 keV with an energy resolution of 3-4 keV FWHM. Triangulation is exploited to estimate the 3D spatial coordinates of the radioactive spots within the system FOV. Two extended sources, with uniformly distributed activity (11 and 24 mm in diameter, respectively), positioned at 16 cm from the system and with 3 cm distance between their centers, can be resolved and localized with an accuracy better than 5%. The results indicate that the estimated positions of spatially extended sources lie within their volume size and that neighboring sources, even with a low level of radioactivity, such as 30 MBq, can be clearly distinguished with an acquisition time of about 3 seconds.
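The rank-19 MURA mask mentioned above has a closed-form construction from quadratic residues (Gottesman and Fenimore), and its defining property is that periodic cross-correlation with the matching decoding array is a perfect delta function. A small illustrative sketch (not the authors' reconstruction code):

```python
import numpy as np

def mura(p):
    """MURA mask of prime rank p (Gottesman & Fenimore construction)."""
    residues = {(i * i) % p for i in range(1, p)}  # quadratic residues mod p
    c = np.array([1 if i in residues else -1 for i in range(p)])
    a = np.zeros((p, p), dtype=int)
    a[1:, 0] = 1                                   # first column open, corner closed
    a[1:, 1:] = (np.outer(c[1:], c[1:]) == 1).astype(int)
    return a                                       # top row stays closed

def decoding_array(a):
    """Decoder G: +1 where the mask is open, -1 where closed, G[0,0] = +1."""
    g = 2 * a - 1
    g[0, 0] = 1
    return g

mask = mura(19)
g = decoding_array(mask)
# Periodic cross-correlation of mask and decoder, computed via 2-D FFTs;
# a delta-function result is what makes MURA decoding artifact-free.
corr = np.real(np.fft.ifft2(np.fft.fft2(mask) * np.conj(np.fft.fft2(g))))
```

The recorded detector image is the sky convolved with the mask pattern, so correlating it with the decoding array recovers the source distribution up to a flat background.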

  19. Accuracy assessment and characterization of x-ray coded aperture coherent scatter spectral imaging for breast cancer classification

    PubMed Central

    Lakshmanan, Manu N.; Greenberg, Joel A.; Samei, Ehsan; Kapadia, Anuj J.

    2017-01-01

Although transmission-based x-ray imaging is the most commonly used imaging approach for breast cancer detection, it exhibits false negative rates higher than 15%. To improve cancer detection accuracy, x-ray coherent scatter computed tomography (CSCT) has been explored to potentially detect cancer with greater consistency. However, the 10-min scan duration of CSCT limits its possible clinical applications. The coded aperture coherent scatter spectral imaging (CACSSI) technique has been shown to reduce scan time through enabling single-angle imaging while providing high detection accuracy. Here, we use Monte Carlo simulations to test analytical optimization studies of the CACSSI technique, specifically for detecting cancer in ex vivo breast samples. An anthropomorphic breast tissue phantom was modeled, a CACSSI imaging system was virtually simulated to image the phantom, a diagnostic voxel classification algorithm was applied to all reconstructed voxels in the phantom, and receiver operating characteristic (ROC) analysis of the voxel classification was used to evaluate and characterize the imaging system for a range of parameters that have been optimized in a prior analytical study. The results indicate that CACSSI is able to identify the distribution of cancerous and healthy tissues (i.e., fibroglandular, adipose, or a mix of the two) in tissue samples with a cancerous voxel identification area-under-the-curve of 0.94 through a scan lasting less than 10 s per slice. These results show that coded aperture scatter imaging has the potential to provide scatter images that automatically differentiate cancerous and healthy tissue within ex vivo samples. Furthermore, the results indicate potential CACSSI imaging system configurations for implementation in subsequent imaging development studies. PMID:28331884

  20. Fuel leak detection apparatus for gas cooled nuclear reactors

    DOEpatents

    Burnette, Richard D.

    1977-01-01

    Apparatus is disclosed for detecting nuclear fuel leaks within nuclear power system reactors, such as high temperature gas cooled reactors. The apparatus includes a probe assembly that is inserted into the high temperature reactor coolant gaseous stream. The probe has an aperture adapted to communicate gaseous fluid between its inside and outside surfaces and also contains an inner tube for sampling gaseous fluid present near the aperture. A high pressure supply of noncontaminated gas is provided to selectively balance the pressure of the stream being sampled to prevent gas from entering the probe through the aperture. The apparatus includes valves that are operable to cause various directional flows and pressures, which valves are located outside of the reactor walls to permit maintenance work and the like to be performed without shutting down the reactor.

  1. Can-out hatch assembly and positioning system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Basnar, P.J.; Frank, R.C.; Hoh, J.C.

    1985-07-03

A can-out hatch assembly is adapted to engage in a sealed manner the upper end of a covered sealed container around an aperture in a sealed chamber and to remove the cover from the container permitting a contaminant to be transferred between the container and the chamber while isolating internal portions of the container and chamber from the surrounding environment. A swing bracket is coupled at a first end thereof to the inner, lower wall of the sealed container adjacent to the aperture therein. To a second end of the swing bracket is mounted a hatch cover which may be positioned in sealed engagement about the chamber's aperture by rotating the hatch cover in a first direction when the swing bracket is in the full down position. Rotation of the hatch cover in a second direction releases it from sealed engagement with the chamber's aperture. A lid support rod also coupled to the second end of the swing bracket and inserted through an aperture in the center of the hatch cover may be rotated for threadably engaging the container's cover whereupon the cover may be removed from the container and the hatch cover displaced from the aperture by pivoting displacement of the swing bracket. The contaminant may then be either removed from the container and placed within the sealed chamber, or vice versa, followed by positioning of the cover upon the container and the hatch cover over the aperture in a sealed manner.

  2. Can-out hatch assembly and positioning system

    DOEpatents

    Basnar, P.J.; Frank, R.C.; Hoh, J.C.

    1985-07-03

A can-out hatch assembly is adapted to engage in a sealed manner the upper end of a covered sealed container around an aperture in a sealed chamber and to remove the cover from the container permitting a contaminant to be transferred between the container and the chamber while isolating internal portions of the container and chamber from the surrounding environment. A swing bracket is coupled at a first end thereof to the inner, lower wall of the sealed container adjacent to the aperture therein. To a second end of the swing bracket is mounted a hatch cover which may be positioned in sealed engagement about the chamber's aperture by rotating the hatch cover in a first direction when the swing bracket is in the full down position. Rotation of the hatch cover in a second direction releases it from sealed engagement with the chamber's aperture. A lid support rod also coupled to the second end of the swing bracket and inserted through an aperture in the center of the hatch cover may be rotated for threadably engaging the container's cover whereupon the cover may be removed from the container and the hatch cover displaced from the aperture by pivoting displacement of the swing bracket. The contaminant may then be either removed from the container and placed within the sealed chamber, or vice versa, followed by positioning of the cover upon the container and the hatch cover over the aperture in a sealed manner.

  3. Can-out hatch assembly and positioning system

    DOEpatents

    Basnar, Paul J.; Frank, Robert C.; Hoh, Joseph C.

    1986-01-01

    A can-out hatch assembly is adapted to engage in a sealed manner the upper end of a covered sealed container around an aperture in a sealed chamber and to remove the cover from the container permitting a contaminant to be transferred between the container and the chamber while isolating internal portions of the container and chamber from the surrounding environment. A swing bracket is coupled at a first end thereof to the inner, lower wall of the sealed container adjacent to the aperture therein. To a second end of the swing bracket is mounted a hatch cover which may be positioned in sealed engagement about the chamber's aperture by rotating the hatch cover in a first direction when the swing bracket is in the full down position. Rotation of the hatch cover in a second direction releases it from sealed engagement with the chamber's aperture. A lid support rod also coupled to the second end of the swing bracket and inserted through an aperture in the center of the hatch cover may be rotated for threadably engaging the container's cover whereupon the cover may be removed from the container and the hatch cover displaced from the aperture by pivoting displacement of the swing bracket. The contaminant may then be either removed from the container and placed within the sealed chamber, or vice versa, followed by positioning of the cover upon the container and the hatch cover over the aperture in a sealed manner.

  4. Can-out hatch assembly and positioning system

    DOEpatents

    Basnar, Paul J.; Frank, Robert C.; Hoh, Joseph C.

    1986-01-07

    A can-out hatch assembly is adapted to engage in a sealed manner the upper end of a covered sealed container around an aperture in a sealed chamber and to remove the cover from the container permitting a contaminant to be transferred between the container and the chamber while isolating internal portions of the container and chamber from the surrounding environment. A swing bracket is coupled at a first end thereof to the inner, lower wall of the sealed container adjacent to the aperture therein. To a second end of the swing bracket is mounted a hatch cover which may be positioned in sealed engagement about the chamber's aperture by rotating the hatch cover in a first direction when the swing bracket is in the full down position. Rotation of the hatch cover in a second direction releases it from sealed engagement with the chamber's aperture. A lid support rod also coupled to the second end of the swing bracket and inserted through an aperture in the center of the hatch cover may be rotated for threadably engaging the container's cover whereupon the cover may be removed from the container and the hatch cover displaced from the aperture by pivoting displacement of the swing bracket. The contaminant may then be either removed from the container and placed within the sealed chamber, or vice versa, followed by positioning of the cover upon the container and the hatch cover over the aperture in a sealed manner.

5. Simulation of co-phase error correction of optical multi-aperture imaging system based on stochastic parallel gradient descent algorithm

    NASA Astrophysics Data System (ADS)

    He, Xiaojun; Ma, Haotong; Luo, Chuanxin

    2016-10-01

The optical multi-aperture imaging system is an effective way to enlarge the aperture and increase the resolution of a telescope optical system, the difficulty of which lies in detecting and correcting the co-phase error. This paper presents a method based on the stochastic parallel gradient descent (SPGD) algorithm to correct the co-phase error. Compared with current methods, the SPGD approach avoids having to detect the co-phase error. This paper analyzed the influence of piston error and tilt error on image quality based on a double-aperture imaging system, introduced the basic principle of the SPGD algorithm, and discussed the influence of the SPGD algorithm's key parameters (the gain coefficient and the disturbance amplitude) on error-control performance. The results show that SPGD can efficiently correct the co-phase error. The convergence speed of the SPGD algorithm improves as the gain coefficient and disturbance amplitude increase, but the stability of the algorithm is reduced. An adaptive gain coefficient can address this trade-off. These results provide a theoretical reference for co-phase error correction in multi-aperture imaging systems.
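The SPGD update described above can be sketched in a few lines: all control channels are perturbed in parallel, the metric is measured twice, and the parameters are stepped along the estimated gradient. A minimal sketch with a toy quadratic metric standing in for the image-quality metric; the gain, disturbance amplitude, and target values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def spgd_minimize(metric, u, gain=0.5, amplitude=0.1, iters=500):
    """Stochastic parallel gradient descent: perturb all channels at once,
    estimate the metric change from two measurements, and step."""
    for _ in range(iters):
        du = amplitude * rng.choice([-1.0, 1.0], size=u.shape)  # parallel Bernoulli perturbation
        dj = metric(u + du) - metric(u - du)                    # two-sided metric difference
        u = u - gain * dj * du                                  # descend the estimated gradient
    return u

# Toy co-phasing metric: squared piston/tilt error relative to a target state.
target = np.array([0.3, -0.2, 0.1])
metric = lambda u: np.sum((u - target) ** 2)
u_final = spgd_minimize(metric, np.zeros(3))
```

Note that the metric is treated as a black box, which is exactly why SPGD needs no explicit co-phase error sensor; larger gain or amplitude speeds convergence at the price of stability, mirroring the trade-off reported in the abstract.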

  6. Design and analysis of a fast, two-mirror soft-x-ray microscope

    NASA Technical Reports Server (NTRS)

    Shealy, D. L.; Wang, C.; Jiang, W.; Jin, L.; Hoover, R. B.

    1992-01-01

    During the past several years, a number of investigators have addressed the design, analysis, fabrication, and testing of spherical Schwarzschild microscopes for soft-x-ray applications using multilayer coatings. Some of these systems have demonstrated diffraction limited resolution for small numerical apertures. Rigorously aplanatic, two-aspherical mirror Head microscopes can provide near diffraction limited resolution for very large numerical apertures. The relationships between the numerical aperture, mirror radii and diameters, magnifications, and total system length for Schwarzschild microscope configurations are summarized. Also, an analysis of the characteristics of the Head-Schwarzschild surfaces will be reported. The numerical surface data predicted by the Head equations were fit by a variety of functions and analyzed by conventional optical design codes. Efforts have been made to determine whether current optical substrate and multilayer coating technologies will permit construction of a very fast Head microscope which can provide resolution approaching that of the wavelength of the incident radiation.

  7. Fast neutron counting in a mobile, trailer-based search platform

    NASA Astrophysics Data System (ADS)

    Hayward, Jason P.; Sparger, John; Fabris, Lorenzo; Newby, Robert J.

    2017-12-01

    Trailer-based search platforms for detection of radiological and nuclear threats are often based upon coded aperture gamma-ray imaging, because this method can be rendered insensitive to local variations in gamma background while still localizing the source well. Since gamma source emissions are rather easily shielded, in this work we consider the addition of fast neutron counting to a mobile platform for detection of sources containing Pu. A proof-of-concept system capable of combined gamma and neutron coded-aperture imaging was built inside of a trailer and used to detect a 252Cf source while driving along a roadway. Neutron detector types employed included EJ-309 in a detector plane and EJ-299-33 in a front mask plane. While the 252Cf gamma emissions were not readily detectable while driving by at 16.9 m standoff, the neutron emissions can be detected while moving. Mobile detection performance for this system and a scaled-up system design are presented, along with implications for threat sensing.

  8. Large-area PSPMT based gamma-ray imager with edge reclamation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ziock, K-P; Nakae, L

    2000-09-21

We describe a coded-aperture gamma-ray imager which uses a CsI(Na) scintillator coupled to a Hamamatsu R3292 position-sensitive photomultiplier tube (PSPMT) as the position-sensitive detector. We have modified the normal resistor-divider readout of the PSPMT to allow use of nearly the full 10 cm diameter active area of the PSPMT with a single scintillator crystal one centimeter thick. This is a significant performance improvement over that obtained with the standard readout technique, where the linearity and position resolution start to degrade at radii as small as 3.5 cm with a crystal 0.75 cm thick. This represents a recovery of over 60% of the PSPMT active area. The performance increase allows the construction of an imager with a field of view 20 resolution elements in diameter with useful quantum efficiency from 60-700 keV. In this paper we describe the readout technique, its implementation in a coded aperture imager, and the performance of that imager.

  9. Individual differences in adaptive coding of face identity are linked to individual differences in face recognition ability.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Hayward, William G; Ewing, Louise

    2014-06-01

    Despite their similarity as visual patterns, we can discriminate and recognize many thousands of faces. This expertise has been linked to 2 coding mechanisms: holistic integration of information across the face and adaptive coding of face identity using norms tuned by experience. Recently, individual differences in face recognition ability have been discovered and linked to differences in holistic coding. Here we show that they are also linked to individual differences in adaptive coding of face identity, measured using face identity aftereffects. Identity aftereffects correlated significantly with several measures of face-selective recognition ability. They also correlated marginally with own-race face recognition ability, suggesting a role for adaptive coding in the well-known other-race effect. More generally, these results highlight the important functional role of adaptive face-coding mechanisms in face expertise, taking us beyond the traditional focus on holistic coding mechanisms. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  10. Drought stress modulates oxylipin signature by eliciting 12-OPDA as a potent regulator of stomatal aperture.

    PubMed

    Savchenko, Tatyana; Dehesh, Katayoon

    2014-01-01

Through evolution, plants have developed a myriad of strategies to adapt to environmental perturbations. Using 3 Arabidopsis ecotypes in conjunction with various transgenic and mutant lines, we provide evidence that wounding and drought differentially alter the metabolic signatures derived from the 2 main competing oxylipin-pathway branches, namely JA and its precursor 12-OPDA, produced by the allene oxide synthase (AOS) branch, and the aldehydes and corresponding alcohols generated by the hydroperoxide lyase (HPL) branch. Specifically, we show that wounding induces production of both HPL- and AOS-derived metabolites, whereas drought stress only elicits production of hexenal but suppresses hexenol, and further uncouples the conversion of 12-OPDA to JA. This finding led to the identification of 12-OPDA as a functional convergence point of the oxylipin and ABA pathways to control stomatal aperture in plant adaptive responses to drought. In addition, transgenic lines overexpressing plastidial and extraplastidial HPL enzymes establish the strong interdependence of the AOS- and HPL-branch pathways, and the importance of this linkage in tailoring plant adaptive responses to the nature of the perturbation.

  11. Relay telescope including baffle, and high power laser amplifier utilizing the same

    DOEpatents

    Dane, C. Brent; Hackel, Lloyd; Harris, Fritz B.

    2006-09-19

A laser system includes an optical path having an intracavity relay telescope with a telescope focal point for imaging an output of the gain medium between an image location at or near the gain medium and an image location at or near an output coupler for the laser system. A kinematic mount is provided within a vacuum chamber and adapted to secure beam baffles near the telescope focal point. An access port on the vacuum chamber is adapted for allowing insertion and removal of the beam baffles. A first baffle formed using an alignment pinhole aperture is used during alignment of the laser system. A second, tapered baffle replaces the alignment aperture during operation and acts as a far-field baffle in which off-angle beams strike the baffle at a grazing angle of incidence, reducing fluence levels at the impact areas.

  12. Multidisciplinary Analysis and Optimal Design: As Easy as it Sounds?

    NASA Technical Reports Server (NTRS)

    Moore, Greg; Chainyk, Mike; Schiermeier, John

    2004-01-01

    The viewgraph presentation examines optimal design for precision, large aperture structures. Discussion focuses on aspects of design optimization, code architecture and current capabilities, and planned activities and collaborative area suggestions. The discussion of design optimization examines design sensitivity analysis; practical considerations; and new analytical environments including finite element-based capability for high-fidelity multidisciplinary analysis, design sensitivity, and optimization. The discussion of code architecture and current capabilities includes basic thermal and structural elements, nonlinear heat transfer solutions and process, and optical modes generation.

  13. Range Sidelobe Response from the Use of Polyphase Signals in Spotlight Synthetic Aperture Radar

    DTIC Science & Technology

    2015-12-01

IQ notation is used to describe the polyphase signals at baseband; it is preferred for complex waveforms because it allows for easy mathematical manipulation. Once the Frank-coded phase vector is created, the IQ signal generation discussed in Chapter II is used to generate the Frank-coded phase signal.

  14. Autistic traits are linked to reduced adaptive coding of face identity and selectively poorer face recognition in men but not women.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Taylor, Libby; Ewing, Louise

    2013-11-01

    Our ability to discriminate and recognize thousands of faces despite their similarity as visual patterns relies on adaptive, norm-based, coding mechanisms that are continuously updated by experience. Reduced adaptive coding of face identity has been proposed as a neurocognitive endophenotype for autism, because it is found in autism and in relatives of individuals with autism. Autistic traits can also extend continuously into the general population, raising the possibility that reduced adaptive coding of face identity may be more generally associated with autistic traits. In the present study, we investigated whether adaptive coding of face identity decreases as autistic traits increase in an undergraduate population. Adaptive coding was measured using face identity aftereffects, and autistic traits were measured using the Autism-Spectrum Quotient (AQ) and its subscales. We also measured face and car recognition ability to determine whether autistic traits are selectively related to face recognition difficulties. We found that men who scored higher on levels of autistic traits related to social interaction had reduced adaptive coding of face identity. This result is consistent with the idea that atypical adaptive face-coding mechanisms are an endophenotype for autism. Autistic traits were also linked with face-selective recognition difficulties in men. However, there were some unexpected sex differences. In women, autistic traits were linked positively, rather than negatively, with adaptive coding of identity, and were unrelated to face-selective recognition difficulties. These sex differences indicate that autistic traits can have different neurocognitive correlates in men and women and raise the intriguing possibility that endophenotypes of autism can differ in males and females. © 2013 Elsevier Ltd. All rights reserved.

  15. Spatially adaptive migration tomography for multistatic GPR imaging

    DOEpatents

    Paglieroni, David W; Beer, N. Reginald

    2013-08-13

    A method and system for detecting the presence of subsurface objects within a medium is provided. In some embodiments, the imaging and detection system operates in a multistatic mode to collect radar return signals generated by an array of transceiver antenna pairs that is positioned across the surface and that travels down the surface. The imaging and detection system pre-processes the return signal to suppress certain undesirable effects. The imaging and detection system then generates synthetic aperture radar images from real aperture radar images generated from the pre-processed return signal. The imaging and detection system then post-processes the synthetic aperture radar images to improve detection of subsurface objects. The imaging and detection system identifies peaks in the energy levels of the post-processed image frame, which indicates the presence of a subsurface object.

  16. Fast-response variable focusing micromirror array lens

    NASA Astrophysics Data System (ADS)

    Boyd, James G., IV; Cho, Gyoungil

    2003-07-01

A reflective-type Fresnel lens using an array of micromirrors is designed and fabricated using the MUMPs® surface micromachining process. The focal length of the lens can be rapidly changed by controlling both the rotation and translation of electrostatically actuated micromirrors. The rotation converges rays and the translation adjusts the optical path length difference of the rays to be integer multiples of the wavelength. The suspension spring, pedestal and electrodes are located under the mirror to maximize the optical efficiency. Relations are provided for the fill-factor and the numerical aperture as functions of the lens diameter, the mirror size, and the tolerances specified by the MUMPs® design rules. The fabricated lens is 1.8 mm in diameter, and each micromirror is approximately 100 μm x 100 μm. The lens fill-factor is 83.7%, the numerical aperture is 0.018 for a wavelength of 632.8 nm, and the resolution is approximately 22 μm, whereas the resolution of a perfect aberration-free lens is 21.4 μm for an NA of 0.018. The focal length ranges from 11.3 mm to infinity. The simulated Strehl ratio, which is the ratio of the point spread function maximum intensity to the theoretical diffraction-limited PSF maximum intensity, is 31.2%. A mechanical analysis was performed using the finite element code IDEAS. The combined maximum rotation and translation produces a maximum stress of 301 MPa, below the yield strength of polysilicon, 1.21 to 1.65 GPa. Potential applications include adaptive microscope lenses for scanning particle imaging velocimetry and visually aided micro-assembly.

  17. ASTROPOP: ASTROnomical Polarimetry and Photometry pipeline

    NASA Astrophysics Data System (ADS)

    Campagnolo, Julio C. N.

    2018-05-01

AstroPOP reduces almost any CCD photometry and image polarimetry data. For photometry reduction, the code performs source finding, aperture and PSF photometry, astrometry calibration using different automated and non-automated methods, and automated source identification and magnitude calibration based on online and local catalogs. For polarimetry, the code resolves linear and circular Stokes parameters produced by image-beam-splitter or polarizer polarimeters. In addition to the modular functions, ready-to-use pipelines based on configuration files and header keys are also provided with the code. AstroPOP was initially developed to reduce data from the IAGPOL polarimeter installed at Observatório Pico dos Dias (Brazil).

  18. Simulation of patch and slot antennas using FEM with prismatic elements and investigations of artificial absorber mesh termination schemes

    NASA Technical Reports Server (NTRS)

Gong, J.; Ozdemir, T.; Volakis, J.; Nurnberger, M.

    1995-01-01

Year 1 progress can be characterized by four major achievements which are crucial toward the development of robust, easy-to-use antenna analysis codes on doubly conformal platforms. (1) A new FEM code was developed using prismatic meshes. This code is based on a new edge-based distorted prism and is particularly attractive for growing meshes associated with printed slot and patch antennas on doubly conformal platforms. It is anticipated that this technology will lead to interactive, simple-to-use codes for a large class of antenna geometries. Moreover, the codes can be expanded to include modeling of the circuit characteristics. An attached report describes the theory and validation of the new prismatic code using reference calculations and measured data collected at the NASA Langley facilities. The agreement between the measured and calculated data is impressive, even for the coated patch configuration. (2) A scheme was developed for improved feed modeling in the context of FEM. A new approach based on the voltage continuity condition was devised and successfully tested in modeling coax cables and aperture-fed antennas. An important aspect of this new feed modeling approach is the ability to completely separate the feed and antenna mesh regions. In this manner, different elements can be used in each of the regions, leading to substantially improved accuracy and meshing simplicity. (3) A most important development this year has been the introduction of the perfectly matched interface (PMI) layer for truncating finite element meshes. So far the robust boundary integral method has been used for truncating the finite element meshes. However, this approach is not suitable for antennas on nonplanar platforms. The PMI layer is a lossy anisotropic absorber with zero reflection at its interface. (4) We were able to interface our antenna code FEMA_CYL (for antennas on cylindrical platforms) with a standard high frequency code. This interface was achieved by first generating equivalent magnetic currents across the antenna aperture using the FEM code. These currents were employed as the sources in the high frequency code.

  19. Experimental Study of Super-Resolution Using a Compressive Sensing Architecture

    DTIC Science & Technology

    2015-03-01


  20. Vision Aided Inertial Navigation System Augmented with a Coded Aperture

    DTIC Science & Technology

    2011-03-24

Cameras with a micro lens array (called plenoptic cameras) capture images from 8 slightly different perspectives [14,43], so that the change in blur at different distances from the pixel plane can be inferred. Dappled photography is similar to the plenoptic camera approach, except that a cosine mask is used.

  1. FPGA-based rate-adaptive LDPC-coded modulation for the next generation of optical communication systems.

    PubMed

    Zou, Ding; Djordjevic, Ivan B

    2016-09-05

In this paper, we propose a rate-adaptive FEC scheme based on LDPC codes, together with its software-reconfigurable unified FPGA architecture. By FPGA emulation, we demonstrate that the proposed class of rate-adaptive LDPC codes based on shortening, with overheads from 25% to 42.9%, provides a coding gain ranging from 13.08 dB to 14.28 dB at a post-FEC BER of 10^-15 for BPSK transmission. In addition, the proposed rate-adaptive LDPC coding has been demonstrated in combination with higher-order modulations, including QPSK, 8-QAM, 16-QAM, 32-QAM, and 64-QAM, which covers a wide range of signal-to-noise ratios. Furthermore, we apply unequal error protection by employing different LDPC codes on different bits in 16-QAM and 64-QAM, which results in an additional 0.5 dB gain compared to conventional LDPC-coded modulation with the same code rate.
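Rate adaptation by shortening, as used above, follows from simple arithmetic: fixing s information bits to zero turns an (n, k) mother code into an (n-s, k-s) code with the same parity budget but higher overhead. A sketch with hypothetical mother-code parameters (the abstract does not give the actual code lengths):

```python
def shortened_code_params(n, k, s):
    """Shorten s information bits of an (n, k) code -> an (n - s, k - s) code.
    Overhead is parity bits divided by remaining information bits."""
    n_s, k_s = n - s, k - s
    return {"rate": k_s / n_s, "overhead": (n_s - k_s) / k_s}

# Hypothetical rate-0.8 mother code (25% overhead):
base = shortened_code_params(30000, 24000, 0)
# Shortening 10000 information bits raises the overhead to 6000/14000 = 42.9%:
adapted = shortened_code_params(30000, 24000, 10000)
```

Because the shortened bits are known to the decoder, the same hardware decodes every operating point, which is what makes a single unified FPGA architecture possible.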

  2. Correlation Between Measured Noise And Its Visual Perception.

    NASA Astrophysics Data System (ADS)

    Bollen, Romain

    1986-06-01

    For obvious reasons, people in the field claim that measured data do not agree with what they perceive. Scientists reply that their data are "true". Are they? Since images are made to be looked at, a request for data that are meaningful to what is perceived is not foolish. We show that, when noise is characterized by standard density-fluctuation figures, good correlation with noise perception by the naked eye on a large-size radiograph is obtained by applying microdensitometric scanning with a 400-micron aperture. For other viewing conditions, the aperture size has to be adapted.

  3. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    PubMed

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.
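
    The focal-plane temporally random-coded shutter can be modeled, at a single pixel, as a coded linear projection of the time-resolved signal. A toy forward model along those lines (an assumption-level sketch, not the sensor's actual pipeline or reconstruction):

```python
import numpy as np

# One pixel's time-resolved signal x[t] is multiplied by pseudo-random
# binary shutter codes and integrated, compressing 32 true frames into
# 15 stored samples, mirroring the 32-from-15 figures quoted above.
rng = np.random.default_rng(0)
T, M = 32, 15
x = rng.random(T)                                   # true time-resolved signal
S = rng.integers(0, 2, size=(M, T)).astype(float)   # binary shutter codes

y = S @ x          # compressed measurements (forward model)

# Minimum-norm least-squares estimate; a real reconstruction would add
# sparsity priors to resolve the underdetermined system.
x_hat, *_ = np.linalg.lstsq(S, y, rcond=None)
```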

  4. GUIELOA: Adaptive Optics System for the 2.1-m SPM UNAM Telescope

    NASA Astrophysics Data System (ADS)

    Cuevas, S.; Iriarte, A.; Martínez, L. A.; Garfias, F.; Sánchez, L.; Chapa, O.; Ruelas, R. A.

    2004-08-01

    GUIELOA is the adaptive optics system project for the 2.1-m SPM telescope. It is a 19-sub-aperture curvature-type system that corrects 8 Zernike terms. GUIELOA is very similar to PUEO, the CFHT adaptive optics system, and compensates for atmospheric turbulence from the R band to the K band. Among the planned applications of GUIELOA are the study of OB binary systems, the detection of close binary stars, and the study of disks, jets, and other phenomena associated with young stars.

  5. Modeling Single Well Injection-Withdrawal (SWIW) Tests for Characterization of Complex Fracture-Matrix Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotte, F.P.; Doughty, C.; Birkholzer, J.

    2010-11-01

    The ability to reliably predict flow and transport in fractured porous rock is an essential condition for performance evaluation of geologic (underground) nuclear waste repositories. In this report, a suite of programs (TRIPOLY code) for calculating and analyzing flow and transport in two-dimensional fracture-matrix systems is used to model single-well injection-withdrawal (SWIW) tracer tests. The SWIW test, a tracer test using one well, is proposed as a useful means of collecting data for site characterization, as well as estimating parameters relevant to tracer diffusion and sorption. After some specific code adaptations, we numerically generated a complex fracture-matrix system for computation of steady-state flow and tracer advection and dispersion in the fracture network, along with solute exchange processes between the fractures and the porous matrix. We then conducted simulations for a hypothetical but workable SWIW test design and completed parameter sensitivity studies on three physical parameters of the rock matrix - namely porosity, diffusion coefficient, and retardation coefficient - in order to investigate their impact on the fracture-matrix solute exchange process. Hydraulic fracturing, or hydrofracking, is also modeled in this study, in two different ways: (1) by increasing the hydraulic aperture for flow in existing fractures and (2) by adding a new set of fractures to the field. The results of all these different tests are analyzed by studying the population of matrix blocks, the tracer spatial distribution, and the breakthrough curves (BTCs) obtained, while performing mass-balance checks and taking care to avoid numerical errors.
    This study clearly demonstrates the importance of matrix effects in the solute transport process, with the sensitivity studies illustrating the increased importance of the matrix in providing a retardation mechanism for radionuclides as matrix porosity, diffusion coefficient, or retardation coefficient increase. Interestingly, model results before and after hydrofracking are insensitive to adding more fractures, while slightly more sensitive to aperture increase, making SWIW tests a possible means of discriminating between these two potential hydrofracking effects. Finally, we investigate the possibility of inferring relevant information regarding the fracture-matrix system physical parameters from the BTCs obtained during SWIW testing.

  6. PARAMESH: A Parallel Adaptive Mesh Refinement Community Toolkit

    NASA Technical Reports Server (NTRS)

    MacNeice, Peter; Olson, Kevin M.; Mobarry, Clark; deFainchtein, Rosalinda; Packer, Charles

    1999-01-01

    In this paper, we describe a community toolkit which is designed to provide parallel support with adaptive mesh capability for a large and important class of computational models, those using structured, logically cartesian meshes. The package of Fortran 90 subroutines, called PARAMESH, is designed to provide an application developer with an easy route to extend an existing serial code which uses a logically cartesian structured mesh into a parallel code with adaptive mesh refinement. Alternatively, in its simplest use, and with minimal effort, it can operate as a domain decomposition tool for users who want to parallelize their serial codes, but who do not wish to use adaptivity. The package can provide them with an incremental evolutionary path for their code, converting it first to uniformly refined parallel code, and then later if they so desire, adding adaptivity.
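
    The block-structured approach PARAMESH supports can be illustrated with a toy refinement step (a hypothetical Python sketch for illustration only; PARAMESH itself is a package of Fortran 90 subroutines): the domain is split into equal logically cartesian blocks, and any block flagged by a refinement criterion is replaced by 2x2 children at half the cell size.

```python
def decompose(x0, x1, y0, y1, nx, ny):
    """Uniform domain decomposition into nx * ny rectangular blocks."""
    dx, dy = (x1 - x0) / nx, (y1 - y0) / ny
    return [(x0 + i * dx, x0 + (i + 1) * dx, y0 + j * dy, y0 + (j + 1) * dy)
            for j in range(ny) for i in range(nx)]

def refine(blocks, needs_refinement):
    """Replace each flagged block with its four children."""
    out = []
    for b in blocks:
        if needs_refinement(b):
            bx0, bx1, by0, by1 = b
            mx, my = (bx0 + bx1) / 2, (by0 + by1) / 2
            out += [(bx0, mx, by0, my), (mx, bx1, by0, my),
                    (bx0, mx, my, by1), (mx, bx1, my, by1)]
        else:
            out.append(b)
    return out

blocks = decompose(0.0, 1.0, 0.0, 1.0, 2, 2)                  # 4 blocks
blocks = refine(blocks, lambda b: b[0] < 0.5 and b[2] < 0.5)  # refine lower-left
print(len(blocks))  # 3 unrefined blocks + 4 children = 7
```

    Skipping the `refine` step corresponds to the "domain decomposition only" usage mentioned above, while repeated calls give the incremental path to full adaptivity.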

  7. 4D Light Field Imaging System Using Programmable Aperture

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam

    2012-01-01

    Complete depth information can be extracted by analyzing all angles of the light rays emanating from a source. However, this angular information is lost in a typical 2D imaging system. In order to record this information, a standard stereo imaging system uses two cameras to obtain information from two view angles. Sometimes, more cameras are used to obtain information from more angles. However, a 4D light field imaging technique can achieve this multiple-camera effect through a single-lens camera. Two methods are available for this: one using a microlens array, and the other using a moving aperture. The moving-aperture method can obtain more complete stereo information. The existing literature suggests a modified liquid crystal panel [LC (liquid crystal) panel, similar to ones commonly used in the display industry] to achieve a moving aperture. However, LC panels cannot withstand harsh environments and are not qualified for spaceflight. In this regard, different hardware is proposed for the moving aperture. A digital micromirror device (DMD) will replace the liquid crystal and will be qualified for harsh environments for 4D light field imaging. This will enable an imager to record near-complete stereo information. The approach to building a proof-of-concept is to use existing, or slightly modified, off-the-shelf components. An SLR (single-lens reflex) lens system, which typically has a large aperture for fast imaging, will be modified and arranged so that the DMD can be integrated. The shape of the aperture will be programmed for single-viewpoint imaging, multiple-viewpoint imaging, and coded aperture imaging. The novelty lies in using a DMD instead of an LC panel to move the apertures for 4D light field imaging. The DMD uses reflecting mirrors, so any loss of light transmission (which would be expected with an LC panel) will be minimal. Also, the MEMS-based DMD can withstand higher temperature and pressure fluctuations than an LC panel can. 
    Robots need near-complete stereo images for autonomous navigation, manipulation, and depth approximation. The imaging system can provide visual feedback

  8. Aperture Photometry Tool

    NASA Astrophysics Data System (ADS)

    Laher, Russ R.; Gorjian, Varoujan; Rebull, Luisa M.; Masci, Frank J.; Fowler, John W.; Helou, George; Kulkarni, Shrinivas R.; Law, Nicholas M.

    2012-07-01

    Aperture Photometry Tool (APT) is software for astronomers and students interested in manually exploring the photometric qualities of astronomical images. It is a graphical user interface (GUI) designed to allow the image data associated with aperture photometry calculations for point and extended sources to be visualized and, therefore, more effectively analyzed. The finely tuned layout of the GUI, along with judicious use of color-coding and alerting, is intended to give maximal user utility and convenience. Simply mouse-clicking on a source in the displayed image will instantly draw a circular or elliptical aperture and sky annulus around the source and will compute the source intensity and its uncertainty, along with several commonly used measures of the local sky background and its variability. The results are displayed and can be optionally saved to an aperture-photometry-table file and plotted on graphs in various ways using functions available in the software. APT is geared toward processing sources in a small number of images and is not suitable for bulk processing a large number of images, unlike other aperture photometry packages (e.g., SExtractor). However, APT does have a convenient source-list tool that enables calculations for a large number of detections in a given image. The source-list tool can be run either in automatic mode to generate an aperture photometry table quickly or in manual mode to permit inspection and adjustment of the calculation for each individual detection. APT displays a variety of useful graphs with just the push of a button, including image histogram, x and y aperture slices, source scatter plot, sky scatter plot, sky histogram, radial profile, curve of growth, and aperture-photometry-table scatter plots and histograms. APT has many functions for customizing the calculations, including outlier rejection, pixel “picking” and “zapping,” and a selection of source and sky models. 
The radial-profile-interpolation source model, which is accessed via the radial-profile-plot panel, allows recovery of source intensity from pixels with missing data and can be especially beneficial in crowded fields.
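
    The core calculation APT performs on each mouse click can be sketched as follows (a simplified model of circular-aperture photometry for illustration, not APT's actual code): sum the pixels inside the aperture, estimate the local sky from the annulus, and subtract the sky contribution.

```python
import numpy as np

def aperture_photometry(image, cx, cy, r_ap, r_in, r_out):
    """Sky-subtracted source intensity in a circular aperture."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy)
    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)
    sky_per_pixel = np.median(image[in_annulus])   # robust local sky estimate
    source = image[in_aperture].sum() - sky_per_pixel * in_aperture.sum()
    return source, sky_per_pixel

# Synthetic check: flat sky of 10 counts plus a 100-count "star".
img = np.full((51, 51), 10.0)
img[25, 25] += 100.0
flux, sky = aperture_photometry(img, 25, 25, r_ap=3, r_in=8, r_out=12)
print(flux, sky)  # 100.0 10.0
```

    The median over the annulus is one of the simpler sky models; APT's outlier rejection and alternative sky models refine this same subtraction.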

  9. The laboratory demonstration and signal processing of the inverse synthetic aperture imaging ladar

    NASA Astrophysics Data System (ADS)

    Gao, Si; Zhang, ZengHui; Xu, XianWen; Yu, WenXian

    2017-10-01

    This paper presents a coherent inverse synthetic aperture imaging ladar (ISAL) system for obtaining high resolution images. A balanced coherent optical system was built in the laboratory with a binary-phase-coded modulation transmit waveform, which differs from the conventional chirp. A complete digital signal processing solution is proposed, including both the quality phase gradient autofocus (QPGA) algorithm and the cubic phase function (CPF) algorithm. Several high-resolution, well-focused ISAL images of retro-reflecting targets are shown to validate the concepts. It is shown that high resolution images can be achieved and that the influence of vibrations of the platform, targets, and radar can be automatically compensated by the distinctive laboratory system and digital signal processing.

  10. Advanced communications payload for mobile applications

    NASA Technical Reports Server (NTRS)

    Ames, S. A.; Kwan, R. K.

    1990-01-01

    An advanced satellite payload is proposed for single-hop linking of mobile terminals of all classes as well as Very Small Aperture Terminals (VSATs). It relies on intensive use of on-board communications processing and beam hopping for efficient link design to maximize capacity, and on a large satellite antenna aperture and high satellite transmitter power to minimize the cost of the ground terminals. Intersatellite links are used to improve link quality and for high-capacity relay. Power budgets are presented for links between the satellite and mobile, VSAT, and hub terminals. Defeating the effects of shadowing and fading requires the use of differentially coherent demodulation, concatenated forward error correction coding, and interleaving, all on a single-link basis.

  11. Tunable-focus lens for adaptive eyeglasses

    PubMed Central

    Hasan, Nazmul; Banerjee, Aishwaryadev; Kim, Hanseup; Mastrangelo, Carlos H.

    2017-01-01

    We demonstrate the implementation of a compact tunable-focus liquid lens suitable for adaptive eyeglass applications. The lens has an aperture diameter of 32 mm, an optical power range of 5.6 diopters, and electrical power consumption of less than 20 mW. The lens, inclusive of its piezoelectric actuation mechanism, is 8.4 mm thick and weighs 14.4 g. The measured lens RMS wavefront aberration error was between 0.73 µm and 0.956 µm. PMID:28158006

  12. High resolution earth observation from geostationary orbit by optical aperture synthesis

    NASA Astrophysics Data System (ADS)

    Mesrine, M.; Thomas, E.; Garin, S.; Blanc, P.; Alis, C.; Cassaing, F.; Laubier, D.

    2017-11-01

    In this paper, we describe Optical Aperture Synthesis (OAS) imaging instrument concepts studied by Alcatel Alenia Space under a CNES R&T contract in terms of technical feasibility. First, the methodology to select the aperture configuration is proposed, based on the definition and quantification of image quality criteria adapted to an OAS instrument for direct imaging of extended objects. The following section presents, for each interferometer type (Michelson and Fizeau), the corresponding optical configurations compatible with a large field of view from GEO orbit. These optical concepts take into account the constraints imposed by the foreseen resolution and the implementation of the co-phasing functions. The fourth section is dedicated to the analysis of the co-phasing methodologies, from the configuration deployment to the fine stabilization during observation. Finally, we present a trade-off analysis allowing selection of the concept with respect to the mission specification and the constraints related to instrument accommodation under the launcher shroud and in-orbit deployment.

  13. Analysis of fratricide effect observed with GeMS and its relevance for large aperture astronomical telescopes

    NASA Astrophysics Data System (ADS)

    Otarola, Angel; Neichel, Benoit; Wang, Lianqi; Boyer, Corinne; Ellerbroek, Brent; Rigaut, François

    2013-12-01

    Large aperture ground-based telescopes require Adaptive Optics (AO) to correct for the distortions induced by atmospheric turbulence and achieve diffraction limited imaging quality. These AO systems rely on Natural and Laser Guide Stars (NGS and LGS) to provide the information required to measure the wavefront from the astronomical sources under observation. In particular, one such LGS method consists in creating an artificial star by means of fluorescence of the sodium atoms at the altitude of the Earth's mesosphere. This is achieved by propagating one or more lasers, at the wavelength of the Na D2a resonance, from the telescope up to the mesosphere. Lasers can be launched either from behind the secondary mirror or from the perimeter of the main aperture, in the so-called central- and side-launch configurations, respectively. The central-launch system, while helpful in reducing the LGS spot elongation, introduces the so-called "fratricide" effect: an increase in the photon noise in the AO Wave Front Sensor (WFS) sub-apertures caused by laser photons back-scattered from atmospheric molecules (Rayleigh scattering) and atmospheric aerosols (dust and/or cirrus cloud ice particles). This affects the performance of the algorithms intended to compute the LGS centroids and subsequently compute and correct the turbulence-induced wavefront distortions. In the frame of the Thirty Meter Telescope (TMT) project and using actual LGS WFS data obtained with the Gemini Multi-Conjugate Adaptive Optics System (Gemini MCAO, a.k.a. GeMS), we show results from an analysis of the temporal variability of the observed fratricide effect, as well as a comparison of the absolute magnitude of the fratricide photon-flux level with simulations using models that account for molecular (Rayleigh) scattering and photons backscattered from cirrus clouds.

  14. Chromatic energy filter and characterization of laser-accelerated proton beams for particle therapy

    NASA Astrophysics Data System (ADS)

    Hofmann, Ingo; Meyer-ter-Vehn, Jürgen; Yan, Xueqing; Al-Omari, Husam

    2012-07-01

    The application of laser-accelerated protons or ions for particle therapy has to cope with relatively large energy and angular spreads as well as possibly significant random fluctuations. We suggest a method for combined focusing and energy selection, which is an effective alternative to the commonly considered dispersive energy selection by magnetic dipoles. Our method is based on the chromatic effect of a magnetic solenoid (or any other energy-dependent focusing device) in combination with an aperture to select a certain energy width defined by the aperture radius. It is applied to an initial 6D phase space distribution of protons following the simulation output from a Radiation Pressure Acceleration model. Analytical formulas for the selection aperture and chromatic emittance are confirmed by simulation results using the TRACEWIN code. The energy selection is supported by properly placed scattering targets to remove the imprint of the chromatic effect on the beam and to enable well-controlled and shot-to-shot reproducible energy and transverse density profiles.
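
    The selection mechanism can be sketched with a toy thin-lens model (an illustrative assumption for geometry only, not the paper's TRACEWIN setup or its analytical formulas): the solenoid focal length scales with proton energy, so off-energy protons arrive at the aperture plane with a nonzero radial offset and are stopped unless that offset is below the aperture radius.

```python
# Toy chromatic scaling: assume the focal length of the focusing element
# grows linearly with energy, f(E) = f0 * E / E0 (a stand-in for the
# real solenoid chromaticity). Lengths in meters, energies in MeV.

def radius_at_aperture(r0, E, E0, f0):
    """Radial offset at z = f0 of a ray entering parallel at radius r0."""
    f = f0 * (E / E0)                 # chromatic focal length (toy model)
    return abs(r0 * (1.0 - f0 / f))  # thin-lens geometry at the aperture plane

def passes(r0, E, E0=10.0, f0=0.2, r_aperture=0.0005):
    """True if the ray clears a 0.5 mm selection aperture."""
    return radius_at_aperture(r0, E, E0, f0) <= r_aperture

# Design-energy protons focus exactly onto the aperture axis and pass;
# a 20% energy error at r0 = 5 mm lands ~0.8 mm off axis and is stopped.
print(passes(0.005, 10.0))   # True
print(passes(0.005, 12.0))   # False
```

    Shrinking `r_aperture` narrows the transmitted energy band, which is the trade-off the analytical selection-aperture formulas quantify.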

  15. Pseudoinverse Decoding Process in Delay-Encoded Synthetic Transmit Aperture Imaging.

    PubMed

    Gong, Ping; Kolios, Michael C; Xu, Yuan

    2016-09-01

    Recently, we proposed a new method to improve the signal-to-noise ratio of the prebeamformed radio-frequency data in synthetic transmit aperture (STA) imaging: the delay-encoded STA (DE-STA) imaging. In the decoding process of DE-STA, the equivalent STA data were obtained by directly inverting the coding matrix. This is usually regarded as an ill-posed problem, especially under high noise levels. Pseudoinverse (PI) is usually used instead for seeking a more stable inversion process. In this paper, we apply singular value decomposition to the coding matrix to conduct the PI. Our numerical studies demonstrate that the singular values of the coding matrix have a special distribution, i.e., all the values are the same except for the first and last ones. We compare the PI in two cases: complete PI (CPI), where all the singular values are kept, and truncated PI (TPI), where the last and smallest singular value is ignored. The PI (both CPI and TPI) DE-STA processes are tested against noise with both numerical simulations and experiments. The CPI and TPI can restore the signals stably, and the noise mainly affects the prebeamformed signals corresponding to the first transmit channel. The difference in the overall enveloped beamformed image qualities between the CPI and TPI is negligible. Thus, it demonstrates that DE-STA is a relatively stable encoding and decoding technique. Also, according to the special distribution of the singular values of the coding matrix, we propose a new efficient decoding formula that is based on the conjugate transpose of the coding matrix. We also compare the computational complexity of the direct inverse and the new formula.
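
    The complete versus truncated pseudoinverse comparison above can be sketched with SVD in a few lines (a generic illustration; the coding matrix here is a random stand-in, not the paper's DE-STA delay-encoding matrix):

```python
import numpy as np

def pseudoinverse(H, truncate_last=False):
    """SVD-based pseudoinverse; optionally drop the smallest singular value."""
    U, s, Vt = np.linalg.svd(H)
    if truncate_last:
        s = s.copy()
        s[-1] = np.inf          # 1/inf = 0: the last component is discarded
    S_inv = np.zeros_like(H.T)
    np.fill_diagonal(S_inv, 1.0 / s)
    return Vt.T @ S_inv @ U.T

rng = np.random.default_rng(1)
H = rng.standard_normal((8, 8))      # stand-in coding matrix
y = H @ np.ones(8)                   # noiseless "coded" data

x_cpi = pseudoinverse(H) @ y                      # complete PI (CPI)
x_tpi = pseudoinverse(H, truncate_last=True) @ y  # truncated PI (TPI)
print(np.allclose(x_cpi, np.ones(8)))  # True: CPI inverts exactly without noise
```

    With noise added to `y`, truncation trades a small bias for suppression of the noise amplified by the smallest singular value, which is the stability question the paper examines.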

  16. Data Intensive Systems (DIS) Benchmark Performance Summary

    DTIC Science & Technology

    2003-08-01

    models assumed by today’s conventional architectures. Such applications include model-based Automatic Target Recognition (ATR), synthetic aperture...radar (SAR) codes, large scale dynamic databases/battlefield integration, dynamic sensor-based processing, high-speed cryptanalysis, high speed...distributed interactive and data intensive simulations, data-oriented problems characterized by pointer-based and other highly irregular data structures

  17. 3D numerical simulations of negative hydrogen ion extraction using realistic plasma parameters, geometry of the extraction aperture and full 3D magnetic field map

    NASA Astrophysics Data System (ADS)

    Mochalskyy, S.; Wünderlich, D.; Ruf, B.; Franzen, P.; Fantz, U.; Minea, T.

    2014-02-01

    Decreasing the co-extracted electron current while keeping the negative ion (NI) current sufficiently high is a crucial issue in the development of the plasma source system for the ITER Neutral Beam Injector. To support finding the best extraction conditions, the 3D Particle-in-Cell Monte Carlo Collision electrostatic code ONIX (Orsay Negative Ion eXtraction) has been developed. Close collaboration with experiments and other numerical models allows realistic simulations with relevant input parameters: plasma properties, geometry of the extraction aperture, a full 3D magnetic field map, etc. For the first time, ONIX has been benchmarked against the commercial positive-ion tracing code KOBRA3D, with very good agreement in the position and depth of the meniscus. Simulations of NI extraction with different e/NI ratios in the bulk plasma show the high relevance of direct extraction of surface-produced NI for obtaining extracted NI currents comparable to the experimental results from the BATMAN testbed.

  18. Multi-diversity combining and selection for relay-assisted mixed RF/FSO system

    NASA Astrophysics Data System (ADS)

    Chen, Li; Wang, Weidong

    2017-12-01

    We propose and analyze multi-diversity combining and selection to enhance the performance of relay-assisted mixed radio frequency/free-space optics (RF/FSO) system. We focus on a practical scenario for cellular network where a single-antenna source is communicating to a multi-apertures destination through a relay equipped with multiple receive antennas and multiple transmit apertures. The RF single input multiple output (SIMO) links employ either maximal-ratio combining (MRC) or receive antenna selection (RAS), and the FSO multiple input multiple output (MIMO) links adopt either repetition coding (RC) or transmit laser selection (TLS). The performance is evaluated via an outage probability analysis over Rayleigh fading RF links and Gamma-Gamma atmospheric turbulence FSO links with pointing errors where channel state information (CSI) assisted amplify-and-forward (AF) scheme is considered. Asymptotic closed-form expressions at high signal-to-noise ratio (SNR) are also derived. Coding gain and diversity order for different combining and selection schemes are further discussed. Numerical results are provided to verify and illustrate the analytical results.
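
    The two RF-side options compared above differ in how branch SNRs combine. A small numeric illustration (generic textbook relations, not the paper's outage analysis): maximal-ratio combining adds the branch SNRs, while antenna selection keeps only the best branch.

```python
import numpy as np

# Hypothetical per-antenna SNRs at the relay's receive antennas.
branch_snr_db = np.array([3.0, 6.0, 0.0])
branch_snr = 10 ** (branch_snr_db / 10)   # convert dB to linear scale

snr_mrc = branch_snr.sum()   # MRC: coherent sum of all branch SNRs
snr_ras = branch_snr.max()   # RAS: best single branch only

print(f"MRC: {10 * np.log10(snr_mrc):.2f} dB, "
      f"RAS: {10 * np.log10(snr_ras):.2f} dB")
```

    MRC always matches or beats selection in output SNR; selection needs only one RF chain, which is the complexity trade-off behind comparing the two schemes.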

  19. Coded aperture coherent scatter spectral imaging for assessment of breast cancers: an ex-vivo demonstration

    NASA Astrophysics Data System (ADS)

    Spencer, James R.; Carter, Joshua E.; Leung, Crystal K.; McCall, Shannon J.; Greenberg, Joel A.; Kapadia, Anuj J.

    2017-03-01

    A Coded Aperture Coherent Scatter Spectral Imaging (CACSSI) system was developed in our group to differentiate cancer and healthy tissue in the breast. The utility of the experimental system was previously demonstrated using anthropomorphic breast phantoms and breast biopsy specimens. Here we demonstrate CACSSI utility in identifying tumor margins in real time using breast lumpectomy specimens. Fresh lumpectomy specimens were obtained from Surgical Pathology with the suspected cancerous area designated on the specimen. The specimens were scanned using CACSSI to obtain spectral scatter signatures at multiple locations within the tumor and surrounding tissue. The spectral reconstructions were matched with literature form-factors to classify the tissue as cancerous or non-cancerous. The findings were then compared against pathology reports to confirm the presence and location of the tumor. The system was found to be capable of consistently differentiating cancerous and healthy regions in the breast with spatial resolution of 5 mm. Tissue classification results from the scanned specimens could be correlated with pathology results. We now aim to develop CACSSI as a clinical imaging tool to aid breast cancer assessment and other diagnostic purposes.

  20. Modification of the short straight sections of the high energy booster of the SSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, M.; Johnson, D.; Kocur, P.

    1993-05-01

    The tracking analysis with the High Energy Booster (HEB) of the Superconducting Super Collider (SSC) indicated that the machine dynamic aperture for the current lattice (Rev 0 lattice) was limited by the quadrupoles in the short straight sections. A new lattice, Rev 1, with modified short straight sections was proposed. The results of tracking the two lattices up to 5 × 10^5 turns (20 seconds at the injection energy) with various random seeds are presented in this paper. The new lattice increases the dynamic aperture from approximately 7 mm to approximately 8 mm, increases the abort kicker effectiveness, and eliminates one family (length) of main quadrupoles. The code DIMAD was used for matching the new short straight sections to the ring. The code TEAPOT was used for the short-term tracking and to create a machine file, zfile, which could in turn be used to generate a one-turn map with ZLIB for fast long-term tracking using the symplectic one-turn map tracking program ZIMAPTRK.

  2. Coding Strategies and Implementations of Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Tsai, Tsung-Han

    This dissertation studies coding strategies for computational imaging to overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager, and increasing sensitivity in any one dimension can significantly compromise the others. This research applies various coding strategies to optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract bandwidth and sensitivity beyond what conventional sensors provide. We discuss the hardware architecture, compression strategies, sensing process modeling, and reconstruction algorithm of each sensing system. Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire extra dimensional information at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and minimal added noise, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can increase sensing capacity by a factor of hundreds. 
    The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information from a noisy environment. Accomplishing the same task through engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localization of multiple speakers in both stationary and dynamic auditory scenes, and distinguishes mixed conversations from independent sources with a high audio recognition rate.

  3. Perceptually-Based Adaptive JPEG Coding

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.; Rosenholtz, Ruth; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    An extension to the JPEG standard (ISO/IEC DIS 10918-3) allows spatial adaptive coding of still images. As with baseline JPEG coding, one quantization matrix applies to an entire image channel, but in addition the user may specify a multiplier for each 8 x 8 block, which multiplies the quantization matrix, yielding the new matrix for the block. MPEG 1 and 2 use much the same scheme, except there the multiplier changes only on macroblock boundaries. We propose a method for perceptual optimization of the set of multipliers. We compute the perceptual error for each block based upon DCT quantization error adjusted according to contrast sensitivity, light adaptation, and contrast masking, and pick the set of multipliers which yield maximally flat perceptual error over the blocks of the image. We investigate the bitrate savings due to this adaptive coding scheme and the relative importance of the different sorts of masking on adaptive coding.
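
    The mechanism described above is simple to state in code: one base quantization matrix applies to the whole channel, and each 8x8 block gets an effective matrix scaled by its multiplier. A sketch of that mechanism (the flat base matrix and multiplier values are illustrative assumptions, not the authors' perceptual optimizer):

```python
import numpy as np

Q = np.full((8, 8), 16.0)   # toy base quantization matrix (flat for clarity)

def quantize_block(dct_block, Q, multiplier):
    """Quantize one 8x8 DCT block with per-block effective matrix m*Q."""
    return np.round(dct_block / (multiplier * Q))

def dequantize_block(coeffs, Q, multiplier):
    return coeffs * (multiplier * Q)

block = np.arange(64, dtype=float).reshape(8, 8) * 4   # toy DCT coefficients
coarse = quantize_block(block, Q, 2.0)   # large multiplier: masked/busy region
fine = quantize_block(block, Q, 0.5)     # small multiplier: sensitive region

# Smaller multipliers preserve more levels, i.e. lower quantization error.
err_coarse = np.abs(dequantize_block(coarse, Q, 2.0) - block).mean()
err_fine = np.abs(dequantize_block(fine, Q, 0.5) - block).mean()
print(err_fine < err_coarse)  # True
```

    The perceptual optimization then amounts to choosing the multiplier per block so that this error, weighted by contrast sensitivity, light adaptation, and masking, is equal across blocks.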

  4. Testing and Performance Analysis of the Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    Soni, Nitin J.

    1996-01-01

    This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.

  5. SU-F-I-53: Coded Aperture Coherent Scatter Spectral Imaging of the Breast: A Monte Carlo Evaluation of Absorbed Dose

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, R; Lakshmanan, M; Fong, G

    Purpose: Coherent scatter based imaging has shown improved contrast and molecular specificity over conventional digital mammography; however, the biological risks have not been quantified due to a lack of accurate information on absorbed dose. This study intends to characterize the dose distribution and average glandular dose from coded aperture coherent scatter spectral imaging of the breast. The dose deposited in the breast by this new diagnostic imaging modality has not yet been quantitatively evaluated. Here, various digitized anthropomorphic phantoms are tested in a Monte Carlo simulation to evaluate the absorbed dose distribution and average glandular dose using clinically feasible scan protocols. Methods: Geant4 Monte Carlo radiation transport simulation software is used to replicate the coded aperture coherent scatter spectral imaging system. Energy sensitive, photon counting detectors are used to characterize the x-ray beam spectra for various imaging protocols. These input spectra are cross-validated against the results from XSPECT, a commercially available application that yields x-ray-tube-specific spectra for the operating parameters employed. XSPECT is also used to determine the appropriate number of photons emitted per mAs of tube current at a given kVp tube potential. With the implementation of the XCAT digital anthropomorphic breast phantom library, a variety of breast sizes with differing anatomical structure are evaluated. Simulations were performed with and without compression of the breast for dose comparison. Results: Through the Monte Carlo evaluation of a diverse population of breast types imaged under real-world scan conditions, a clinically relevant average glandular dose for this new imaging modality is extrapolated. 
Conclusion: With access to the physical coherent scatter imaging system used in the simulation, the results of this Monte Carlo study may be used to directly influence the future development of the modality to keep breast dose to a minimum while still maintaining clinically viable image quality.

  6. Three-Dimensional Terahertz Coded-Aperture Imaging Based on Matched Filtering and Convolutional Neural Network.

    PubMed

    Chen, Shuo; Luo, Chenggao; Wang, Hongqiang; Deng, Bin; Cheng, Yongqiang; Zhuang, Zhaowen

    2018-04-26

    As a promising radar imaging technique, terahertz coded-aperture imaging (TCAI) can achieve high-resolution, forward-looking, and staring imaging by producing spatiotemporal independent signals with coded apertures. However, there are still two problems in three-dimensional (3D) TCAI. Firstly, the large-scale reference-signal matrix based on meshing the 3D imaging area creates a heavy computational burden, thus leading to unsatisfactory efficiency. Secondly, it is difficult to resolve the target under low signal-to-noise ratio (SNR). In this paper, we propose a 3D imaging method based on matched filtering (MF) and convolutional neural network (CNN), which can reduce the computational burden and achieve high-resolution imaging for low SNR targets. In terms of the frequency-hopping (FH) signal, the original echo is processed with MF. By extracting the processed echo in different spike pulses separately, targets in different imaging planes are reconstructed simultaneously to decompose the global computational complexity, and then are synthesized together to reconstruct the 3D target. Based on the conventional TCAI model, we deduce and build a new TCAI model based on MF. Furthermore, the convolutional neural network (CNN) is designed to teach the MF-TCAI how to reconstruct the low SNR target better. The experimental results demonstrate that the MF-TCAI achieves impressive performance on imaging ability and efficiency under low SNR. Moreover, the MF-TCAI has learned to better resolve the low-SNR 3D target with the help of CNN. In summary, the proposed 3D TCAI can achieve: (1) low-SNR high-resolution imaging by using MF; (2) efficient 3D imaging by downsizing the large-scale reference-signal matrix; and (3) intelligent imaging with CNN. Therefore, the TCAI based on MF and CNN has great potential in applications such as security screening, nondestructive detection, medical diagnosis, etc.
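
    The matched-filtering front end used here is standard pulse compression: correlate the received echo with the time-reversed complex conjugate of the known transmitted waveform. A minimal sketch (illustrative only, not the authors' implementation; the short waveform in the usage example stands in for a real FH pulse):

```python
import numpy as np

def matched_filter(echo, reference):
    """Classic matched filter: convolve the echo with the time-reversed
    complex conjugate of the known transmitted waveform."""
    return np.convolve(echo, np.conj(reference[::-1]), mode="full")
```

    For a delayed copy of the waveform, the output magnitude peaks at the lag corresponding to the target's delay, which is what lets targets in different imaging planes be separated before reconstruction.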

  7. A finite element boundary integral formulation for radiation and scattering by cavity antennas using tetrahedral elements

    NASA Technical Reports Server (NTRS)

    Gong, J.; Volakis, J. L.; Chatterjee, A.; Jin, J. M.

    1992-01-01

    A hybrid finite element boundary integral formulation is developed using tetrahedral and/or triangular elements for discretizing the cavity and/or aperture of microstrip antenna arrays. Tetrahedral elements with edge-based linear expansion functions are chosen for modeling the volume region, and triangular elements are used for discretizing the aperture. The edge-based expansion functions are divergenceless, thus removing the requirement to introduce a penalty term, and the tetrahedral elements permit greater geometrical adaptability than rectangular bricks. The underlying theory and resulting expressions are discussed in detail, together with some numerical scattering examples for comparison and demonstration.

  8. Membrane adaptive optics

    NASA Astrophysics Data System (ADS)

    Marker, Dan K.; Wilkes, James M.; Ruggiero, Eric J.; Inman, Daniel J.

    2005-08-01

    An innovative adaptive optic is discussed that provides a range of capabilities unavailable with either existing or newly reported research devices. It is believed that this device will be inexpensive and uncomplicated to construct and operate, with a large correction range that should dramatically relax the static and dynamic structural tolerances of a telescope. As the areal density of a telescope primary is reduced, the optimal optical figure and the structural stiffness are inherently compromised, and this phenomenon requires a responsive, range-enhanced wavefront corrector. In addition to correcting for the aberrations in such innovative primary mirrors, sufficient throw remains to provide non-mechanical steering to dramatically improve the field of regard. Time-dependent changes such as thermal disturbances can also be accommodated. The proposed adaptive optic will overcome some of the issues facing conventional deformable mirrors, as well as current and proposed MEMS-based deformable mirrors and liquid-crystal-based adaptive optics. Such a device is scalable to meter-diameter apertures, eliminates high actuation voltages with minimal power consumption, provides long-throw optical path correction, provides polychromatic dispersion-free operation, dramatically reduces the effects of adjacent-actuator influence, and provides a nearly 100% useful aperture. This article reveals top-level details of the proposed construction and includes portions of a static, dynamic, and residual aberration analysis. This device will enable certain designs previously conceived by visionaries in the optical community.

  9. Advanced Imaging Optics Utilizing Wavefront Coding.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scrymgeour, David; Boye, Robert; Adelsberger, Kathleen

    2015-06-01

    Image processing offers a potential to simplify an optical system by shifting some of the imaging burden from lenses to the more cost-effective electronics. Wavefront coding using a cubic phase plate combined with image processing can extend the system's depth of focus, reducing many of the focus-related aberrations as well as material-related chromatic aberrations. However, the optimal design process and physical limitations of wavefront coding systems with respect to first-order optical parameters and noise are not well documented. We examined image quality of simulated and experimental wavefront coded images before and after reconstruction in the presence of noise. Challenges in the implementation of cubic phase in an optical system are discussed. In particular, we found that limitations must be placed on system noise, aperture, field of view and bandwidth to develop a robust wavefront coded system.

  10. Adaptive variable-length coding for efficient compression of spacecraft television data.

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Plaunt, J. R.

    1971-01-01

    An adaptive variable length coding system is presented. Although developed primarily for the proposed Grand Tour missions, many features of this system clearly indicate a much wider applicability. Using sample to sample prediction, the coding system produces output rates within 0.25 bit/picture element (pixel) of the one-dimensional difference entropy for entropy values ranging from 0 to 8 bit/pixel. This is accomplished without the necessity of storing any code words. Performance improvements of 0.5 bit/pixel can be simply achieved by utilizing previous line correlation. A Basic Compressor, using concatenated codes, adapts to rapid changes in source statistics by automatically selecting one of three codes to use for each block of 21 pixels. The system adapts to less frequent, but more dramatic, changes in source statistics by adjusting the mode in which the Basic Compressor operates on a line-to-line basis. Furthermore, the compression system is independent of the quantization requirements of the pulse-code modulation system.
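
    The per-block code selection can be illustrated with Golomb-Rice codes (a simplification: the Basic Compressor's actual concatenated codes and its line-to-line mode logic are not reproduced here, and the parameter set is hypothetical):

```python
def rice_length(value, k):
    """Bit length of a nonnegative sample under a Golomb-Rice code with
    parameter k: a unary quotient (value >> k, plus terminator) and k remainder bits."""
    return (value >> k) + 1 + k

def choose_code(block, ks=(0, 1, 2)):
    """For one block of samples, pick the code parameter with the lowest total
    bit cost, mimicking the per-21-pixel-block adaptation described above."""
    return min(ks, key=lambda k: sum(rice_length(v, k) for v in block))
```

    Because the selector only compares costs, no code words need to be stored, in the same spirit as the system described in the abstract.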

  11. Dynamic properties of the adaptive optics system depending on the temporary transformations of mirror control voltages

    NASA Astrophysics Data System (ADS)

    Lavrinov, V. V.; Lavrinova, L. N.

    2017-11-01

    The statistically optimal control algorithm for the correcting mirror is formed by constructing a prediction of the distortions of the optical signal, improving the time resolution of the adaptive optics system. The prediction of distortions is based on an analysis of the dynamics of changes in the optical inhomogeneities of the turbulent atmosphere, or the evolution of phase fluctuations at the input aperture of the adaptive system. The dynamic properties of the system are manifested during the temporal transformation of the voltages controlling the mirror and are determined by the dynamic characteristics of the flexible mirror.

  12. Shaping electromagnetic waves using software-automatically-designed metasurfaces.

    PubMed

    Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie

    2017-06-15

    We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of the automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.

  13. Bats' avoidance of real and virtual objects: implications for the sonar coding of object size.

    PubMed

    Goerlitz, Holger R; Genzel, Daria; Wiegrebe, Lutz

    2012-01-01

    Fast movement in complex environments requires the controlled evasion of obstacles. Sonar-based obstacle evasion involves analysing the acoustic features of object-echoes (e.g., echo amplitude) that correlate with this object's physical features (e.g., object size). Here, we investigated sonar-based obstacle evasion in bats emerging in groups from their day roost. Using video-recordings, we first show that the bats evaded a small real object (ultrasonic loudspeaker) despite the familiar flight situation. Secondly, we studied the sonar coding of object size by adding a larger virtual object. The virtual object echo was generated by real-time convolution of the bats' calls with the acoustic impulse response of a large spherical disc and played from the loudspeaker. Contrary to the real object, the virtual object did not elicit evasive flight, despite the spectro-temporal similarity of real and virtual object echoes. Yet, their spatial echo features differ: virtual object echoes lack the spread of angles of incidence from which the echoes of large objects arrive at a bat's ears (sonar aperture). We hypothesise that this mismatch of spectro-temporal and spatial echo features caused the lack of virtual object evasion and suggest that the sonar aperture of object echoscapes contributes to the sonar coding of object size. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Controlled deterministic implantation by nanostencil lithography at the limit of ion-aperture straggling

    NASA Astrophysics Data System (ADS)

    Alves, A. D. C.; Newnham, J.; van Donkelaar, J. A.; Rubanov, S.; McCallum, J. C.; Jamieson, D. N.

    2013-04-01

    Solid state electronic devices fabricated in silicon employ many ion implantation steps in their fabrication. In nanoscale devices deterministic implants of dopant atoms with high spatial precision will be needed to overcome problems with statistical variations in device characteristics and to open new functionalities based on controlled quantum states of single atoms. However, to deterministically place a dopant atom with the required precision is a significant technological challenge. Here we address this challenge with a strategy based on stepped nanostencil lithography for the construction of arrays of single implanted atoms. We address the limit on spatial precision imposed by ion straggling in the nanostencil—fabricated with the readily available focused ion beam milling technique followed by Pt deposition. Two nanostencils have been fabricated; a 60 nm wide aperture in a 3 μm thick Si cantilever and a 30 nm wide aperture in a 200 nm thick Si3N4 membrane. The 30 nm wide aperture demonstrates the fabricating process for sub-50 nm apertures while the 60 nm aperture was characterized with 500 keV He+ ion forward scattering to measure the effect of ion straggling in the collimator and deduce a model for its internal structure using the GEANT4 ion transport code. This model is then applied to simulate collimation of a 14 keV P+ ion beam in a 200 nm thick Si3N4 membrane nanostencil suitable for the implantation of donors in silicon. We simulate collimating apertures with widths in the range of 10-50 nm because we expect the onset of J-coupling in a device with 30 nm donor spacing. We find that straggling in the nanostencil produces mis-located implanted ions with a probability between 0.001 and 0.08 depending on the internal collimator profile and the alignment with the beam direction. This result is favourable for the rapid prototyping of a proof-of-principle device containing multiple deterministically implanted dopants.

  15. Compact electrostatic beam optics for multi-element focused ion beams: simulation and experiments.

    PubMed

    Mathew, Jose V; Bhattacharjee, Sudeep

    2011-01-01

    Electrostatic beam optics for a multi-element focused ion beam (MEFIB) system comprising a microwave multicusp plasma (ion) source is designed with the help of two widely known and commercially available beam simulation codes: AXCEL-INP and SIMION. The input parameters to the simulations are obtained from experiments carried out in the system. A single and a double Einzel lens system (ELS), with and without beam-limiting apertures (S), have been investigated. For a 1 mm beam at the plasma electrode aperture, the rms emittance of the focused ion beam is found to reduce from ∼0.9 mm mrad for the single ELS to ∼0.5 mm mrad for the double ELS when an S of 0.5 mm aperture size is employed. The emittance can be further improved to ∼0.1 mm mrad by maintaining S at ground potential, leading to a reduction in beam spot size (∼10 μm). The double ELS design is optimized for different electrode geometrical parameters with tolerances of ±1 mm in electrode thickness, electrode aperture, and inter-electrode distance, and ±1° in electrode angle, providing a robust design. Experimental results obtained with the double ELS for the focused beam current and spot size agree reasonably well with the simulations.

  16. Comparison between broadband Bessel beam launchers based on either Bessel or Hankel aperture distribution for millimeter wave short pulse generation.

    PubMed

    Pavone, Santi C; Mazzinghi, Agnese; Freni, Angelo; Albani, Matteo

    2017-08-07

    In this paper, a comparison is presented between Bessel beam launchers at millimeter waves based on either a cylindrical standing wave (CSW) or a cylindrical inward traveling wave (CITW) aperture distribution. It is theoretically shown that CITW launchers are better suited for the generation of electromagnetic short pulses because they maintain their performance over a larger bandwidth than those realizing a CSW aperture distribution. Moreover, the wavenumber dispersion of both launchers is evaluated theoretically and numerically. To this end, two planar Bessel beam launchers, one enforcing a CSW and the other a CITW aperture distribution, are designed at millimeter waves with a center operating frequency of 60 GHz and analyzed in the 50-70 GHz band using an in-house numerical code that solves Maxwell's equations with the method of moments. It is shown that a monochromatic Bessel beam can be efficiently generated by both launchers over a wide fractional bandwidth. Finally, we investigate the generation of limited-diffraction electromagnetic pulses at millimeter waves, up to a certain non-diffractive range. Namely, it is shown that by feeding the launcher with a Gaussian short pulse, a spatially confined electromagnetic pulse can be efficiently generated in front of the launcher.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rottmann, J; Berbeco, R; Keall, P

    Purpose: To maximize normal-tissue sparing for treatments requiring motion-encompassing margins. Motion mitigation techniques, including DMLC or couch tracking, can freeze tumor motion within the treatment aperture, potentially allowing smaller treatment margins and thus better sparing of normal tissue. To enable safe application of this concept in the clinic, we propose adapting margins dynamically in real time during radiotherapy delivery based on personalized tumor-localization confidence. To demonstrate technical feasibility we present a phantom study. Methods: We utilize a realistic anthropomorphic dynamic thorax phantom with a lung tumor model embedded close to the spine. The tumor, a 3D printout of a patient's GTV, is moved 15 mm peak-to-peak by diaphragm compression and monitored by continuous EPID imaging in real time. Two treatment apertures are created for each beam, one representing ITV-based and the other GTV-based margin expansion. A soft-tissue localization (STiL) algorithm utilizing the continuous EPID images is employed to freeze tumor motion within the treatment aperture by means of DMLC tracking. Depending on a tracking confidence measure (TCM), the treatment aperture is adjusted between the ITV and the GTV leaf positions. Results: We successfully demonstrate real-time personalized margin adjustment in a phantom study. We measured a system latency of about 250 ms, which we compensated by utilizing a respiratory motion prediction algorithm (ridge regression). With prediction in place we observe tracking accuracies better than 1 mm. For TCM=0 (as during startup) an ITV-based treatment aperture is chosen, for TCM=1 a GTV-based aperture, and for intermediate values an aperture between the two.

  18. Differential expression and emerging functions of non-coding RNAs in cold adaptation.

    PubMed

    Frigault, Jacques J; Morin, Mathieu D; Morin, Pier Jr

    2017-01-01

    Several species undergo substantial physiological and biochemical changes to confront the harsh conditions associated with winter. Small mammalian hibernators and cold-hardy insects are examples of natural models of cold adaptation that have been amply explored. While the molecular picture associated with cold adaptation has started to become clearer in recent years, notably through the use of high-throughput experimental approaches, the underlying cold-associated functions attributed to several non-coding RNAs, including microRNAs (miRNAs) and long non-coding RNAs (lncRNAs), remain to be better characterized. Nevertheless, key pioneering work has provided clues on the likely relevance of these molecules in cold adaptation. With an emphasis on mammalian hibernation and insect cold hardiness, this work first reviews various molecular changes documented so far in these processes. The cascades leading to miRNA and lncRNA production as well as the mechanisms of action of these non-coding RNAs are subsequently described. Finally, we present examples of differentially expressed non-coding RNAs in models of cold adaptation and elaborate on the potential significance of this modulation with respect to low-temperature adaptation.

  19. Statistical analysis of wavefront fluctuations from measurements of a wave-front sensor

    NASA Astrophysics Data System (ADS)

    Botygina, N. N.; Emaleev, O. N.; Konyaev, P. A.; Lukin, V. P.

    2017-11-01

    Measurements of the wavefront aberrations at the input aperture of the Large Solar Vacuum Telescope (LSVT) were carried out by the wave-front sensor (WFS) of an adaptive optical system with the controlled deformable mirror replaced by a plane one.

  20. Coded aperture imaging with uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.
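
    The balanced correlation technique can be illustrated in one dimension with a quadratic-residue array, a close 1D relative of the URA (an illustrative sketch, not the patented 2D mosaicked system):

```python
import numpy as np

def balanced_decode(detector, mask):
    """Decode a coded-aperture recording by circular cross-correlation with
    the balanced array G: +1 where the mask is open, -1 where it is opaque."""
    G = np.where(mask == 1, 1.0, -1.0)
    return np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(G))))
```

    For a length-7 quadratic-residue mask, a shifted copy of the mask (the shadow of a point source) decodes to a single peak over perfectly flat sidelobes, the artifact-free property the abstract describes.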

  1. Coded aperture imaging with uniformly redundant arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1982-01-01

    A system utilizing uniformly redundant arrays to image non-focusable radiation. The uniformly redundant array is used in conjunction with a balanced correlation technique to provide a system with no artifacts such that virtually limitless signal-to-noise ratio is obtained with high transmission characteristics. Additionally, the array is mosaicked to reduce required detector size over conventional array detectors.

  2. Methods for increasing the sensitivity of gamma-ray imagers

    DOEpatents

    Mihailescu, Lucian [Pleasanton, CA; Vetter, Kai M [Alameda, CA; Chivers, Daniel H [Fremont, CA

    2012-02-07

    Methods are presented that increase the position resolution and granularity of double sided segmented semiconductor detectors. These methods increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.

  3. Systems for increasing the sensitivity of gamma-ray imagers

    DOEpatents

    Mihailescu, Lucian; Vetter, Kai M.; Chivers, Daniel H.

    2012-12-11

    Systems that increase the position resolution and granularity of double sided segmented semiconductor detectors are provided. These systems increase the imaging resolution capability of such detectors, either used as Compton cameras, or as position sensitive radiation detectors in imagers such as SPECT, PET, coded apertures, multi-pinhole imagers, or other spatial or temporal modulated imagers.

  4. 3D synthetic aperture for controlled-source electromagnetics

    NASA Astrophysics Data System (ADS)

    Knaak, Allison

    Locating hydrocarbon reservoirs has become more challenging with smaller, deeper, or shallower targets in complicated environments. Controlled-source electromagnetics (CSEM) is a geophysical electromagnetic method used to detect and derisk hydrocarbon reservoirs in marine settings, but it is limited by the size of the target, low spatial resolution, and the depth of the reservoir. To reduce the impact of complicated settings and improve the detection capabilities of CSEM, I apply synthetic aperture to CSEM responses, which virtually increases the length and width of the CSEM source by combining the responses from multiple individual sources. Applying a weight to each source steers or focuses the synthetic aperture source array in the inline and crossline directions. To evaluate the benefits of a 2D source distribution, I test steered synthetic aperture on 3D diffusive fields and view the changes with a new visualization technique. Then I apply 2D steered synthetic aperture to 3D noisy synthetic CSEM fields, which increases the detectability of the reservoir significantly. With more general weighting, I develop an optimization method to find the optimal weights for synthetic aperture arrays that adapts to the information in the CSEM data. The application of optimally weighted synthetic aperture to noisy, simulated electromagnetic fields reduces the presence of noise, increases detectability, and better defines the lateral extent of the target. I then modify the optimization method to include a term that minimizes the variance of random, independent noise. With the application of the modified optimization method, the weighted synthetic aperture response amplifies the anomaly from the reservoir, lowers the noise floor, and reduces noise streaks in noisy CSEM responses from sources offset kilometers from the receivers. 
Even with changes to the location of the reservoir and perturbations to the physical properties, synthetic aperture is still able to highlight targets correctly, which allows use of the method in locations where the subsurface models are built from only estimates. In addition to the technical work in this thesis, I explore the interface between science, government, and society by examining the controversy over hydraulic fracturing and by suggesting a process to aid the debate and possibly other future controversies.

  5. Rigorous study of low-complexity adaptive space-time block-coded MIMO receivers in high-speed mode multiplexed fiber-optic transmission links using few-mode fibers

    NASA Astrophysics Data System (ADS)

    Weng, Yi; He, Xuan; Wang, Junyi; Pan, Zhongqi

    2017-01-01

    Spatial-division multiplexing (SDM) techniques have been proposed to increase the capacity of optical fiber transmission links by utilizing multicore fibers or few-mode fibers (FMF). The most challenging impairments of SDM-based long-haul optical links are modal dispersion and mode-dependent loss (MDL); MDL arises from inline component imperfections and breaks modal orthogonality, thus degrading the capacity of multiple-input multiple-output (MIMO) receivers. To reduce MDL, optical approaches include mode scramblers and specialty fiber designs, yet these methods carry high cost and cannot completely remove the accumulated MDL in the link. Space-time trellis codes (STTC) have also been proposed to lessen MDL, but suffer from high complexity. In this work, we investigated the performance of a space-time block-coding (STBC) scheme to mitigate MDL in SDM-based optical communication by exploiting space and delay diversity, where the weight matrices of the frequency-domain equalization (FDE) were updated heuristically using a decision-directed recursive-least-squares (RLS) algorithm for convergence and channel estimation. The STBC was evaluated in a six-mode multiplexed system over 30-km FMF via 6×6 MIMO FDE, with modal gain offset 3 dB, core refractive index 1.49, and numerical aperture 0.5. Results show that the optical-signal-to-noise-ratio (OSNR) tolerance can be improved via STBC by approximately 3.1, 4.9, and 7.8 dB for QPSK, 16-QAM, and 64-QAM at their respective bit-error rates (BER) with minimum-mean-square-error (MMSE) equalization. We also evaluate the complexity optimization of the STBC decoding scheme with a zero-forcing decision-feedback (ZFDF) equalizer by shortening the coding slot length, which is robust to frequency-selective fading channels and can be scaled up for SDM systems with more dynamic channels.
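
    The space and delay diversity that STBC exploits is easiest to see in the classic two-branch Alamouti code over a single-output channel (a minimal sketch; the paper's 6×6 MIMO FDE/RLS receiver is far more involved, and these function names are hypothetical):

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Map a symbol pair onto two transmit branches over two time slots:
    slot 1 sends (s1, s2), slot 2 sends (-conj(s2), conj(s1))."""
    return np.array([[s1, s2], [-np.conj(s2), np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining with known channel gains h1, h2; recovers
    (|h1|^2 + |h2|^2) * (s1, s2) with no cross-term interference."""
    e1 = np.conj(h1) * r1 + h2 * np.conj(r2)
    e2 = np.conj(h2) * r1 - h1 * np.conj(r2)
    return e1, e2
```

    The orthogonal structure is what makes the combining a few multiply-adds per symbol, which is why block codes are attractive here compared with trellis decoding.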

  6. Was Wright Right? The Canonical Genetic Code is an Empirical Example of an Adaptive Peak in Nature; Deviant Genetic Codes Evolved Using Adaptive Bridges

    PubMed Central

    2010-01-01

    The canonical genetic code is on a sub-optimal adaptive peak with respect to its ability to minimize errors, and is close to, but not quite, optimal. This is demonstrated by the near-total adjacency of synonymous codons, the similarity of adjacent codons, and comparisons of frequency of amino acid usage with number of codons in the code for each amino acid. As a rare empirical example of an adaptive peak in nature, it shows adaptive peaks are real, not merely theoretical. The evolution of deviant genetic codes illustrates how populations move from a lower to a higher adaptive peak. This is done by the use of “adaptive bridges,” neutral pathways that cross over maladaptive valleys by virtue of masking of the phenotypic expression of some maladaptive aspects in the genotype. This appears to be the general mechanism by which populations travel from one adaptive peak to another. There are multiple routes a population can follow to cross from one adaptive peak to another. These routes vary in the probability that they will be used, and this probability is determined by the number and nature of the mutations that happen along each of the routes. A modification of the depiction of adaptive landscapes showing genetic distances and probabilities of travel along their multiple possible routes would throw light on this important concept. PMID:20711776

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strauss, H.R.

    This paper describes the code FEMHD, an adaptive finite element MHD code, which is applied in a number of different ways to model MHD behavior and edge plasma phenomena in a diverted tokamak. The code uses an unstructured triangular mesh in 2D and wedge-shaped mesh elements in 3D. The code has been adapted to study neutral and charged particle dynamics in the plasma scrape-off region, and has been extended into a full MHD-particle code.

  8. Optical components of adaptive systems for improving laser beam quality

    NASA Astrophysics Data System (ADS)

    Malakhov, Yuri I.; Atuchin, Victor V.; Kudryashov, Aleksis V.; Starikov, Fedor A.

    2008-10-01

    A short overview is given of the optical equipment developed within the ISTC activity for adaptive systems of a new generation, allowing correction of high-power laser beams carrying optical vortices on the phase surface. These include kinoform many-level optical elements of a new generation, namely special spiral phase plates and ordered rasters of microlenses (i.e. lenslet arrays), as well as wide-aperture Hartmann-Shack sensors and bimorph deformable piezoceramics-based mirrors with various grids of control elements.

  9. Synthetic aperture radar signal data compression using block adaptive quantization

    NASA Technical Reports Server (NTRS)

    Kuduvalli, Gopinath; Dutkiewicz, Melanie; Cumming, Ian

    1994-01-01

    This paper describes the design and testing of an on-board SAR signal data compression algorithm for ESA's ENVISAT satellite. The Block Adaptive Quantization (BAQ) algorithm was selected, and optimized for the various operational modes of the ASAR instrument. A flexible BAQ scheme was developed which allows a selection of compression ratio/image quality trade-offs. Test results show the high quality of the SAR images processed from the reconstructed signal data, and the feasibility of on-board implementation using a single ASIC.
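    The BAQ idea can be sketched in a few lines: each block of raw signal samples is scaled by its own statistic and quantized with a fixed few-level quantizer, so only the per-block gain and the small sample codes need to be transmitted. A minimal 2-bit sketch (the thresholds and levels are the standard Lloyd-Max values for unit-variance Gaussian data; this is an illustration, not the ENVISAT flight implementation):

```python
import numpy as np

# Lloyd-Max decision thresholds and reconstruction levels for a 2-bit
# quantizer matched to unit-variance Gaussian samples
THRESHOLDS = np.array([-0.9816, 0.0, 0.9816])
LEVELS = np.array([-1.510, -0.4528, 0.4528, 1.510])

def baq_compress(signal, block_len=128):
    gains, codes = [], []
    for start in range(0, len(signal), block_len):
        block = signal[start:start + block_len]
        g = float(block.std()) or 1.0                 # per-block adaptive gain
        idx = np.searchsorted(THRESHOLDS, block / g)  # 2-bit code per sample
        gains.append(g)
        codes.append(idx.astype(np.uint8))
    return np.array(gains), codes

def baq_decompress(gains, codes):
    # Reconstruct each block by rescaling the fixed levels with its gain
    return np.concatenate([g * LEVELS[c] for g, c in zip(gains, codes)])
```

    Sending one gain per block plus 2 bits per sample yields a fixed compression ratio (e.g., 4:1 from 8-bit raw samples); reconstruction quality then depends on how well each block is locally Gaussian.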

  10. Superwide-angle coverage code-multiplexed optical scanner.

    PubMed

    Riza, Nabeel A; Arain, Muzammil A

    2004-05-01

    A superwide-angle coverage code-multiplexed optical scanner is presented that has the potential to provide 4π sr coverage. As a proof-of-concept experiment, an angular scan range of 288° for six randomly distributed beams is demonstrated. The proposed scanner achieves its superwide coverage by exploiting a combination of phase-encoded transmission and reflection holography within an in-line hologram recording-retrieval geometry. The basic scanner unit consists of one phase-only digital mode spatial light modulator for code entry (i.e., beam scan control) and a holographic material, from which we obtained what we believe is a first-of-its-kind extremely wide coverage, low component count, high speed (microsecond domain), large aperture (>1 cm diameter) scanner.

  11. JLIFE: THE JEFFERSON LAB INTERACTIVE FRONT END FOR THE OPTICAL PROPAGATION CODE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watson, Anne M.; Shinn, Michelle D.

    2013-08-01

    We present details on a graphical interface for the open source software program Optical Propagation Code, or OPC. This interface, written in Java, allows a user with no knowledge of OPC to create an optical system, with lenses, mirrors, apertures, etc., and the appropriate drifts between them. The Java code creates the appropriate Perl script that serves as the input for OPC. The mode profile is then output at each optical element. The display can be either an intensity profile along the x axis or an isometric 3D plot which can be tilted and rotated. These profiles can be saved. Examples of the input and output will be presented.

  12. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (IBM PC VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, the flexibility to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture-type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG will compute the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures that are located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5-inch 1.44 MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  13. CWG - MUTUAL COUPLING PROGRAM FOR CIRCULAR WAVEGUIDE-FED APERTURE ARRAY (VAX VMS VERSION)

    NASA Technical Reports Server (NTRS)

    Bailey, M. C.

    1994-01-01

    Mutual Coupling Program for Circular Waveguide-fed Aperture Array (CWG) was developed to calculate the electromagnetic interaction between elements of an antenna array of circular apertures with specified aperture field distributions. The field distributions were assumed to be a superposition of the modes which could exist in a circular waveguide. Various external media were included to provide flexibility of use, for example, the flexibility to determine the effects of dielectric covers (i.e., thermal protection system tiles) upon the impedance of aperture-type antennas. The impedance and radiation characteristics of planar array antennas depend upon the mutual interaction between all the elements of the array. These interactions are influenced by several parameters (e.g., the array grid geometry, the geometry and excitation of each array element, the medium outside the array, and the internal network feeding the array). For the class of array antenna whose radiating elements consist of small holes in a flat conducting plate, the electromagnetic problem can be divided into two parts, the internal and the external. In solving the external problem for an array of circular apertures, CWG will compute the mutual interaction between various combinations of circular modal distributions and apertures. CWG computes the mutual coupling between various modes assumed to exist in circular apertures that are located in a flat conducting plane of infinite dimensions. The apertures can radiate into free space, a homogeneous medium, a multilayered region or a reflecting surface. These apertures are assumed to be excited by one or more modes corresponding to the modal distributions in circular waveguides of the same cross sections as the apertures. The apertures may be of different sizes and also of different polarizations. However, the program assumes that each aperture field contains the same modal distributions, and calculates the complex scattering matrix between all mode and aperture combinations. The scattering matrix can then be used to determine the complex modal field amplitudes for each aperture with a specified array excitation. CWG is written in VAX FORTRAN for DEC VAX series computers running VMS (LAR-15236) and IBM PC series and compatible computers running MS-DOS (LAR-15226). It requires 360K of RAM for execution. To compile the source code for the PC version, the NDP Fortran compiler and linker will be required; however, the distribution medium for the PC version of CWG includes a sample MS-DOS executable which was created using NDP Fortran with the -vms compiler option. The standard distribution medium for the PC version of CWG is a 3.5-inch 1.44 MB MS-DOS format diskette. The standard distribution medium for the VAX version of CWG is a 1600 BPI 9-track magnetic tape in DEC VAX BACKUP format. The VAX version is also available on a TK50 tape cartridge in DEC VAX BACKUP format. Both machine versions of CWG include an electronic version of the documentation in Microsoft Word for Windows format. CWG was developed in 1993 and is a copyrighted work with all copyright vested in NASA.

  14. Quality Scalability Aware Watermarking for Visual Content.

    PubMed

    Bhowmik, Deepayan; Abhayaratne, Charith

    2016-11-01

    Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction in the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios without affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet-domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness trade-off. A blind extractor is capable of extracting the watermark data from the watermarked images. The algorithm is further extended to incorporate the bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000, which improves robustness against the quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality-scalable content adaptation; the proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality-scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.

  15. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among the wavefront control algorithms used in adaptive optics (AO) systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time behavior and stability. However, as the numbers of wavefront sensor sub-apertures and deformable mirror actuators in AO systems increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of AO systems. In this paper we apply an iterative wavefront control algorithm to high-resolution AO systems, in which the voltage of each actuator is obtained by iteration, yielding large savings in computation and storage. For an AO system with thousands of actuators, the computational complexity of the direct gradient algorithm is about O(n^2) to O(n^3), while that of the iterative algorithm is about O(n) to O(n^(3/2)), where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage of the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
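    The trade-off the abstract describes can be illustrated with a toy reconstructor (a dense random influence matrix stands in for the real Hartmann-sensor geometry, which is an assumption of this sketch): the direct method applies a precomputed pseudo-inverse every frame, while the iterative method runs a few conjugate-gradient steps on the normal equations, ideally warm-started from the previous frame's voltages.

```python
import numpy as np

def direct_control(R, slopes):
    # R: precomputed reconstructor (pseudo-inverse of the influence matrix);
    # one dense matrix-vector product per frame
    return R @ slopes

def iterative_control(A, slopes, v0, n_iter=20):
    # Solve A^T A v = A^T s by conjugate gradient; each step costs only
    # products with A, which is sparse in a real AO system
    b = A.T @ slopes
    v = v0.copy()
    r = b - A.T @ (A @ v)
    p = r.copy()
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = (r @ r) / (p @ Ap)
        v += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < 1e-10:
            break
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return v
```

    Warm-starting from the previous frame is what makes the iterative variant cheap in closed loop: the wavefront changes little between frames, so few iterations suffice.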

  16. Pseudo-color coding method for high-dynamic single-polarization SAR images

    NASA Astrophysics Data System (ADS)

    Feng, Zicheng; Liu, Xiaolin; Pei, Bingzhi

    2018-04-01

    A raw synthetic aperture radar (SAR) image usually has a 16-bit or higher bit depth, which cannot be directly visualized on 8-bit displays. In this study, we propose a pseudo-color coding method for high-dynamic-range single-polarization SAR images. The method considers the characteristics of both SAR images and human perception. In HSI (hue, saturation, intensity) color space, it carries out high-dynamic-range tone mapping and pseudo-color processing simultaneously, in order to avoid loss of detail and to improve object identifiability. It is a highly efficient global algorithm.
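    The approach can be approximated in a few lines: tone-map the 16-bit amplitude logarithmically into [0, 1], then assign hue and intensity jointly. A hedged sketch using Python's stdlib HSV conversion (HSV stands in here for the paper's HSI space, and the hue ramp and gamma are illustrative choices, not the authors' parameters):

```python
import colorsys
import numpy as np

def pseudo_color(sar, gamma=0.5):
    # Log tone mapping compresses the high dynamic range of the raw amplitudes
    x = np.log1p(sar.astype(np.float64))
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)  # normalize to [0, 1]
    x = x ** gamma                                   # boost dark detail
    rgb = np.empty(sar.shape + (3,), dtype=np.uint8)
    for idx in np.ndindex(sar.shape):
        # hue runs blue (dark) -> red (bright); intensity follows the tone map
        h = 0.66 * (1.0 - x[idx])
        r, g, b = colorsys.hsv_to_rgb(h, 0.8, x[idx])
        rgb[idx] = (int(r * 255), int(g * 255), int(b * 255))
    return rgb
```

    A production version would vectorize the HSV-to-RGB step, but the mapping itself is the point: it is global (one transfer function for the whole image), which is what makes the method fast.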

  17. Optical advantages of astigmatic aberration corrected heliostats

    NASA Astrophysics Data System (ADS)

    van Rooyen, De Wet; Schöttl, Peter; Bern, Gregor; Heimsath, Anna; Nitz, Peter

    2016-05-01

    Astigmatic aberration corrected heliostats adapt their shape depending on the angle of incidence of the sun on the heliostat. Simulations show that this optical correction leads to a higher concentration ratio at the target, and thus a decrease in the required receiver aperture, in particular for smaller heliostat fields.

  18. STAR adaptation of two algorithms used on serial computers

    NASA Technical Reports Server (NTRS)

    Howser, L. M.; Lambiotte, J. J., Jr.

    1974-01-01

    Two representative algorithms used on a serial computer and presently executed on the Control Data Corporation 6000 computer were adapted to execute efficiently on the Control Data STAR-100 computer. Gaussian elimination for the solution of simultaneous linear equations and the Gauss-Legendre quadrature formula for the approximation of an integral are the two algorithms discussed. A description is given of how the programs were adapted for STAR and why these adaptations were necessary to obtain an efficient STAR program. Some points to consider when adapting an algorithm for STAR are discussed. Program listings of the 6000 version coded in 6000 FORTRAN, the adapted STAR version coded in 6000 FORTRAN, and the STAR version coded in STAR FORTRAN are presented in the appendices.
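    The quadrature half of the paper translates naturally into a modern vector idiom: evaluate the integrand at all Gauss-Legendre nodes at once rather than in a scalar loop, which is the same restructuring the STAR-100 adaptation called for. A small NumPy sketch (the node count is arbitrary):

```python
import numpy as np

def gauss_legendre_integrate(f, a, b, n=16):
    # Nodes and weights on [-1, 1], mapped affinely to [a, b]; the integrand
    # is evaluated on the whole node vector in one call (vectorized)
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.sum(w * f(t))
```

    The n-point rule is exact for polynomials up to degree 2n - 1, so modest n already gives near machine-precision results for smooth integrands.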

  19. A case for Sandia investment in complex adaptive systems science and technology.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colbaugh, Richard; Tsao, Jeffrey Yeenien; Johnson, Curtis Martin

    2012-05-01

    This white paper makes a case for Sandia National Laboratories investments in complex adaptive systems science and technology (S&T) -- investments that could enable higher-value-added and more-robustly-engineered solutions to challenges of importance to Sandia's national security mission and to the nation. Complex adaptive systems are ubiquitous in Sandia's national security mission areas. We often ignore the adaptive complexity of these systems by narrowing our 'aperture of concern' to systems or subsystems with a limited range of function exposed to a limited range of environments over limited periods of time. But by widening our aperture of concern we could increase our impact considerably. To do so, the science and technology of complex adaptive systems must mature considerably. Despite an explosion of interest outside of Sandia, however, that science and technology is still in its youth. What has been missing is contact with real (rather than model) systems and real domain-area detail. With its center of gravity as an engineering laboratory, Sandia has made considerable progress applying existing science and technology to real complex adaptive systems. It has focused much less, however, on advancing the science and technology itself. But its close contact with real systems and real domain-area detail represents a powerful strength with which to help complex adaptive systems science and technology mature. Sandia is thus both a prime beneficiary of, as well as potentially a prime contributor to, complex adaptive systems science and technology.
Building a productive program in complex adaptive systems science and technology at Sandia will not be trivial, but a credible path can be envisioned: in the short run, continue to apply existing science and technology to real domain-area complex adaptive systems; in the medium run, jump-start the creation of new science and technology capability through Sandia's Laboratory Directed Research and Development program; and in the long run, inculcate an awareness at the Department of Energy of the importance of supporting complex adaptive systems science through its Office of Science.

  20. Adaptive format conversion for scalable video coding

    NASA Astrophysics Data System (ADS)

    Wan, Wade K.; Lim, Jae S.

    2001-12-01

    The enhancement layer in many scalable coding algorithms is composed of residual coding information. Another type of information can be transmitted instead of (or in addition to) the residual. Since the encoder has access to the original sequence, it can use adaptive format conversion (AFC) to generate the enhancement layer and transmit the chosen format conversion methods as enhancement data. This paper investigates the use of adaptive format conversion information as enhancement data in scalable video coding. Experimental results are shown for a wide range of base layer qualities and enhancement bitrates to determine when AFC can improve video scalability. Since the parameters needed for AFC are small compared to residual coding, AFC can provide video scalability at low enhancement layer bitrates that are not possible with residual coding. AFC can also be used alongside residual coding to improve video scalability at higher enhancement layer bitrates. Adaptive format conversion has not been studied in detail, but many scalable applications may benefit from it. One application for which AFC is well suited is the migration path for digital television, where it can provide immediate video scalability as well as assist future migrations.
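    The mechanism can be sketched in one dimension: upconvert the base layer with each candidate conversion method, and for each block send only the index of the method that best matches the original. A minimal illustration (the two conversion methods and the block size are hypothetical stand-ins, not the paper's actual method set):

```python
import numpy as np

def afc_choose(original, base, block=8):
    """Per-block adaptive format conversion: the encoder, which has the
    original, picks the best upconversion method per block; only the small
    per-block method indices are sent as enhancement data."""
    # method 0: sample-and-hold; method 1: linear interpolation
    up0 = np.repeat(base, 2)
    up1 = np.interp(np.arange(2 * len(base)), np.arange(0, 2 * len(base), 2), base)
    choices, recon = [], np.empty_like(up0)
    for s in range(0, len(up0), block):
        sl = slice(s, s + block)
        errs = [np.sum((original[sl] - u[sl]) ** 2) for u in (up0, up1)]
        k = int(np.argmin(errs))
        choices.append(k)
        recon[sl] = (up0, up1)[k][sl]
    return np.array(choices), recon
```

    By construction the per-block choice can never do worse than either fixed method alone, and the side information is only one index per block, which is why AFC works at enhancement bitrates far below residual coding.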

  1. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, R.S.

    1986-05-02

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  2. Adapter plate assembly for adjustable mounting of objects

    DOEpatents

    Blackburn, Robert S.

    1987-01-01

    An adapter plate and two locking discs are together affixed to an optic table with machine screws or bolts threaded into a fixed array of internally threaded holes provided in the table surface. The adapter plate preferably has two, and preferably parallel, elongated locating slots each freely receiving a portion of one of the locking discs for secure affixation of the adapter plate to the optic table. A plurality of threaded apertures provided in the adapter plate are available to attach optical mounts or other devices onto the adapter plate in an orientation not limited by the disposition of the array of threaded holes in the table surface. An axially aligned but radially offset hole through each locking disc receives a screw that tightens onto the table, such that prior to tightening of the screw the locking disc may rotate and translate within each locating slot of the adapter plate for maximum flexibility of the orientation thereof.

  3. Awareness Becomes Necessary Between Adaptive Pattern Coding of Open and Closed Curvatures

    PubMed Central

    Sweeny, Timothy D.; Grabowecky, Marcia; Suzuki, Satoru

    2012-01-01

    Visual pattern processing becomes increasingly complex along the ventral pathway, from the low-level coding of local orientation in the primary visual cortex to the high-level coding of face identity in temporal visual areas. Previous research using pattern aftereffects as a psychophysical tool to measure activation of adaptive feature coding has suggested that awareness is relatively unimportant for the coding of orientation, but awareness is crucial for the coding of face identity. We investigated where along the ventral visual pathway awareness becomes crucial for pattern coding. Monoptic masking, which interferes with neural spiking activity in low-level processing while preserving awareness of the adaptor, eliminated open-curvature aftereffects but preserved closed-curvature aftereffects. In contrast, dichoptic masking, which spares spiking activity in low-level processing while wiping out awareness, preserved open-curvature aftereffects but eliminated closed-curvature aftereffects. This double dissociation suggests that adaptive coding of open and closed curvatures straddles the divide between weakly and strongly awareness-dependent pattern coding. PMID:21690314

  4. Results of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) Experiment

    NASA Technical Reports Server (NTRS)

    Wilson, K. E.; Leatherman, P. R.; Cleis, R.; Spinhirne, J.; Fugate, R. Q.

    1997-01-01

    Adaptive optics techniques can be used to realize a robust low bit-error-rate link by mitigating the atmosphere-induced signal fades in optical communications links between ground-based transmitters and deep-space probes. Phase I of the Compensated Earth-Moon-Earth Retroreflector Laser Link (CEMERLL) experiment demonstrated the first propagation of an atmosphere-compensated laser beam to the lunar retroreflectors. A 1.06-micron Nd:YAG laser beam was propagated through the full aperture of the 1.5-m telescope at the Starfire Optical Range (SOR), Kirtland Air Force Base, New Mexico, to the Apollo 15 retroreflector array at Hadley Rille. Laser guide-star adaptive optics were used to compensate turbulence-induced aberrations across the transmitter's 1.5-m aperture. A 3.5-m telescope, also located at the SOR, was used as a receiver for detecting the return signals. JPL-supplied Chebyshev polynomials of the retroreflector locations were used to develop tracking algorithms for the telescopes. At times we observed in excess of 100 photons returned from a single pulse when the outgoing beam from the 1.5-m telescope was corrected by the adaptive optics system. No returns were detected when the outgoing beam was uncompensated. The experiment was conducted from March through September 1994, during the first or last quarter of the Moon.

  5. Visual coding of human bodies: perceptual aftereffects reveal norm-based, opponent coding of body identity.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J

    2013-04-01

    Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this aftereffect increased with adaptor extremity, as predicted by norm-based, opponent coding of body identity. A size change between adapt and test bodies minimized the effects of low-level, retinotopic adaptation. These results demonstrate that body identity, like face identity, is opponent coded in higher-level vision. More generally, they show that a norm-based multidimensional framework, which is well established for face perception, may provide a powerful framework for understanding body perception.

  6. Spatially variant apodization for squinted synthetic aperture radar images.

    PubMed

    Castillo-Rubio, Carlos F; Llorente-Romano, Sergio; Burgos-García, Mateo

    2007-08-01

    Spatially variant apodization (SVA) is a nonlinear sidelobe reduction technique that lowers the sidelobe level while preserving resolution. The method implements a two-dimensional finite impulse response filter whose taps adapt to the image content. Previously published papers analyze SVA at the Nyquist rate, or at higher rates, focused on stripmap synthetic aperture radar (SAR). This paper shows that traditional SVA techniques are useless when the sensor operates with a squint angle. The reasons for this behavior are analyzed, and a new implementation that largely improves the results is presented. The algorithm is applied to simulated SAR images to demonstrate the good image quality achieved along with efficient computation.
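    The core of classical SVA (for Nyquist-sampled, unsquinted data, i.e., the baseline this paper extends) fits in a few lines: each output sample gets its own cosine-on-pedestal weight, chosen between uniform (w = 0) and Hann (w = 0.5) weighting so as to minimize that sample's magnitude. A one-dimensional sketch; complex imagery applies it to the I and Q channels independently:

```python
import numpy as np

def sva_1d(x):
    # Spatially variant apodization on Nyquist-sampled data: for each sample,
    # pick the raised-cosine weight in [0, 0.5] that minimizes its magnitude
    y = x.copy()
    for n in range(1, len(x) - 1):
        s = x[n - 1] + x[n + 1]
        if s == 0:
            continue
        w = -x[n] / s                  # weight that would null this sample
        w = min(max(w, 0.0), 0.5)      # constrain between uniform and Hann
        y[n] = x[n] + w * s            # cosine-on-pedestal family output
    return y
```

    On a sinc-like point response this nulls or shrinks the sidelobes while the clamp at w = 0 leaves the mainlobe samples untouched, which is the "sidelobe reduction without resolution loss" property.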

  7. Reflective baffle for BepiColombo mission

    NASA Astrophysics Data System (ADS)

    Rugi-Grond, E.; Weigel, T.; Herren, A.; Dominguez Calvo, M.; Krähenbühl, U.; Mouricaud, D.; Vayssade, H.

    2017-11-01

    The BepiColombo spacecraft cannot tolerate absorbing a major fraction of the off-axis sunlight entering through larger payload apertures. Fortunately, baffles can be designed to reflect the incoming radiation back through the front aperture rather than absorbing it. A design study, sponsored by ESA and performed by Contraves Space together with SAGEM Défense Securité, has analysed the potential of various solutions and assessed the options for manufacturing them. The selected configuration has been analysed in detail for optical, mechanical and thermal performance, as well as for the impact on mass and power dissipation. The size of the baffle was adapted to the needs of the BepiColombo Laser Altimeter (BELA) payload.

  8. Enhanced attention amplifies face adaptation.

    PubMed

    Rhodes, Gillian; Jeffery, Linda; Evangelista, Emma; Ewing, Louise; Peters, Marianne; Taylor, Libby

    2011-08-15

    Perceptual adaptation not only produces striking perceptual aftereffects, but also enhances coding efficiency and discrimination by calibrating coding mechanisms to prevailing inputs. Attention to simple stimuli increases adaptation, potentially enhancing its functional benefits. Here we show that attention also increases adaptation to faces. In Experiment 1, face identity aftereffects increased when attention to adapting faces was increased using a change detection task. In Experiment 2, figural (distortion) face aftereffects increased when attention was increased using a snap game (detecting immediate repeats) during adaptation. Both were large effects. Contributions of low-level adaptation were reduced using free viewing (both experiments) and a size change between adapt and test faces (Experiment 2). We suggest that attention may enhance adaptation throughout the entire cortical visual pathway, with functional benefits well beyond the immediate advantages of selective processing of potentially important stimuli. These results highlight the potential to facilitate adaptive updating of face-coding mechanisms by strategic deployment of attentional resources. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Single Carrier with Frequency Domain Equalization for Synthetic Aperture Underwater Acoustic Communications

    PubMed Central

    He, Chengbing; Xi, Rui; Wang, Han; Jing, Lianyou; Shi, Wentao; Zhang, Qunfei

    2017-01-01

    Phase-coherent underwater acoustic (UWA) communication systems typically employ multiple hydrophones in the receiver to achieve spatial diversity gain. However, small underwater platforms can only carry a single transducer, which cannot provide spatial diversity gain. In this paper, we propose single-carrier with frequency domain equalization (SC-FDE) for phase-coherent synthetic aperture acoustic communications, in which a virtual array is generated by the relative motion between the transmitter and the receiver. This paper presents synthetic aperture acoustic communication results using SC-FDE through data collected during a lake experiment in January 2016. The performance of two receiver algorithms is analyzed and compared: the frequency domain equalizer (FDE) and the hybrid time-frequency domain equalizer (HTFDE). The distance between the transmitter and the receiver in the experiment was about 5 km. The bit error rate (BER) and output signal-to-noise ratio (SNR) performances with different receiver elements and transmission numbers are presented. After combining multiple transmissions, error-free reception using a convolutional code with a data rate of 8 kbps was demonstrated. PMID:28684683
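    The receiver's frequency-domain step is compact enough to sketch: with a cyclic prefix, the channel becomes diagonal in the DFT domain, so equalization is one complex weight per frequency bin. A minimal one-tap MMSE FDE (the channel is assumed known, e.g., from a training sequence; this illustrates the FDE building block, not the paper's full HTFDE receiver):

```python
import numpy as np

def fde_mmse(rx_block, h, noise_var):
    # One-tap MMSE frequency-domain equalizer for a cyclic-prefixed block;
    # with noise_var = 0 this reduces to zero-forcing (1/H per bin)
    N = len(rx_block)
    H = np.fft.fft(h, N)                            # channel frequency response
    W = np.conj(H) / (np.abs(H) ** 2 + noise_var)   # per-bin MMSE weight
    return np.fft.ifft(W * np.fft.fft(rx_block))
```

    The per-block cost is a handful of FFTs, which is why SC-FDE suits long delay-spread UWA channels where time-domain equalizers would need hundreds of taps.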

  10. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation

    NASA Astrophysics Data System (ADS)

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-01

    We report what are, to our knowledge, the first results obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of the multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ~550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection.

  11. Facilities for High Resolution Imaging of the Sun

    NASA Astrophysics Data System (ADS)

    von der Lühe, Oskar

    2018-04-01

    The Sun is the only star where physical processes can be observed at their intrinsic spatial scales. Even though the Sun is a mere 150 million km from Earth, it is difficult to resolve fundamental processes in the solar atmosphere, because they occur at scales on the order of a kilometer. They can be observed only with telescopes that have apertures of several meters. The current state of the art is solar telescopes with apertures of 1.5 m, which resolve 50 km on the solar surface; these will soon be superseded by telescopes with 4 m apertures and 20 km resolution. The US 4 m Daniel K. Inouye Solar Telescope (DKIST) is currently under construction on Maui, Hawaii, and is expected to achieve first light in 2020. The European solar community is collaborating intensively to pursue the 4 m European Solar Telescope, with a construction start in the Canaries early in the next decade. Solar telescopes with slightly smaller apertures are also being planned by the Russian, Indian and Chinese communities. In order to achieve a resolution approaching the diffraction limit, all modern solar telescopes use adaptive optics, which can compensate virtually any scene on the solar disk. Multi-conjugate adaptive optics designed to compensate fields on the order of one minute of arc have been demonstrated and will become a facility feature of the new telescopes. The requirements of high-precision spectro-polarimetry (about one part in 10^4) make continuous monitoring of (MC)AO performance and post-processing image reconstruction methods a necessity.

  12. New Millennium Inflatable Structures Technology

    NASA Technical Reports Server (NTRS)

    Mollerick, Ralph

    1997-01-01

    Specific applications where inflatable technology can enable or enhance future space missions are tabulated. The applicability of the inflatable technology to large aperture infra-red astronomy missions is discussed. Space flight validation and risk reduction are emphasized along with the importance of analytical tools in deriving structurally sound concepts and performing optimizations using compatible codes. Deployment dynamics control, fabrication techniques, and system testing are addressed.

  13. Studies of auroral X-ray imaging from high altitude spacecraft

    NASA Technical Reports Server (NTRS)

    Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.

    1980-01-01

    Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.

  14. A novel bit-wise adaptable entropy coding technique

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is adaptable in that each bit to be encoded may have an associated probability estimate which depends on previously encoded bits. The technique may have advantages over arithmetic coding. The technique can achieve arbitrarily small redundancy and admits a simple and fast decoder.
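    The per-bit adaptivity described above can be illustrated with a context-conditioned probability model. The sketch below (an illustration of the general idea, not the authors' coder) scores a bit stream with a Krichevsky-Trofimov style estimator and accumulates the ideal codelength of -log2(p) per bit, which a near-optimal entropy coder could approach.

    ```python
    import math
    import random

    def adaptive_bit_model_codelength(bits, order=1):
        """Ideal codelength (bits) of a bit sequence under a context-adaptive
        model: each bit's probability is estimated from counts of previously
        encoded bits sharing the same context (the last `order` bits)."""
        counts = {}                     # context -> [count_of_0, count_of_1]
        context = (0,) * order
        total = 0.0
        for b in bits:
            c0, c1 = counts.get(context, [0, 0])
            p1 = (c1 + 0.5) / (c0 + c1 + 1.0)   # KT-style estimate for '1'
            p = p1 if b == 1 else 1.0 - p1
            total += -math.log2(p)              # ideal cost of this bit
            if b == 1:
                c1 += 1
            else:
                c0 += 1
            counts[context] = [c0, c1]
            context = context[1:] + (b,)
        return total

    # A strongly biased source should cost well under one bit per symbol.
    random.seed(0)
    bits = [1 if random.random() < 0.9 else 0 for _ in range(10000)]
    rate = adaptive_bit_model_codelength(bits) / len(bits)
    ```

    Because the model adapts from previously encoded bits only, a decoder can reproduce the same probability estimates without side information.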

  15. Flowfield computer graphics

    NASA Technical Reports Server (NTRS)

    Desautel, Richard

    1993-01-01

    The objectives of this research include supporting the Aerothermodynamics Branch's research by developing graphical visualization tools for both the branch's adaptive grid code and flow field ray tracing code. The completed research for the reporting period includes development of a graphical user interface (GUI) and its implementation into the NAS Flowfield Analysis Software Tool kit (FAST), for both the adaptive grid code (SAGE) and the flow field ray tracing code (CISS).

  16. A Comparative Study of Automated Infrasound Detectors - PMCC and AFD with Analyst Review.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Junghyun; Hayward, Chris; Zeiler, Cleat

    Automated detections calculated by the progressive multi-channel correlation (PMCC) method (Cansi, 1995) and the adaptive F detector (AFD) (Arrowsmith et al., 2009) are compared to the signals identified by five independent analysts. Each detector was applied to a four-hour time sequence recorded by the Korean infrasound array CHNAR. This array was used because it is composed of both small (<100 m) and large (~1000 m) aperture element spacing. The four-hour time sequence contained a number of easily identified signals under noise conditions with average RMS amplitudes varying from 1.2 to 4.5 mPa (1 to 5 Hz), estimated with a running five-minute window. The effectiveness of the detectors was estimated for the small aperture, large aperture, small aperture combined with the large aperture, and full array. The full and combined arrays performed the best for AFD under all noise conditions, while the large aperture array had the poorest performance for both detectors. PMCC produced similar results to AFD under the lower noise conditions, but did not produce as dramatic an increase in detections using the full and combined arrays. Both automated detectors and the analysts produced a decrease in detections under the higher noise conditions. Comparing the detection probabilities with Estimated Receiver Operating Characteristic (EROC) curves, we found that the smaller value of consistency for PMCC and the larger p-value for AFD had the highest detection probability. These parameters produced greater changes in detection probability than estimates of the false alarm rate. The detection probability was impacted the most by noise level, with low noise (average RMS amplitude of 1.7 mPa) having an average detection probability of ~40% and high noise (average RMS amplitude of 2.9 mPa) an average detection probability of ~23%.
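    The noise levels quoted above come from running-window RMS estimates. A minimal sketch of that computation (the window length default follows the five-minute window mentioned above; the sample rate and synthetic trace are hypothetical, not CHNAR data):

    ```python
    import numpy as np

    def running_rms(x, fs, window_s=300.0):
        """RMS amplitude of x in a sliding window (default five minutes),
        computed from a cumulative sum of squared samples."""
        x = np.asarray(x, float)
        n = max(1, min(int(window_s * fs), len(x)))
        csum = np.concatenate(([0.0], np.cumsum(x ** 2)))
        return np.sqrt((csum[n:] - csum[:-n]) / n)

    # Synthetic trace: zero-mean Gaussian 'noise' with 2 mPa RMS at 20 Hz.
    rng = np.random.default_rng(1)
    x = rng.normal(0.0, 2.0, size=20 * 3600)   # one hour of samples
    rms = running_rms(x, fs=20.0)
    ```

    In practice the trace would first be bandpass filtered (1 to 5 Hz above) before the RMS is taken.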

  17. Development of Monte Carlo based real-time treatment planning system with fast calculation algorithm for boron neutron capture therapy.

    PubMed

    Takada, Kenta; Kumada, Hiroaki; Liem, Peng Hong; Sakurai, Hideyuki; Sakae, Takeji

    2016-12-01

    We simulated the effect of patient displacement on organ doses in boron neutron capture therapy (BNCT). In addition, we developed a faster calculation algorithm (NCT high-speed) to simulate irradiation more efficiently. We simulated dose evaluation for the standard irradiation position (reference position) using a head phantom. Cases were assumed where the patient body is shifted in the lateral directions relative to the reference position, as well as in the direction away from the irradiation aperture. For three neutron energy groups (thermal, epithermal, and fast), the flux distribution was calculated using NCT high-speed with a voxelized homogeneous phantom. The three groups of neutron fluxes were calculated for the same conditions with a Monte Carlo code, and the results were compared. In the evaluations of body movements, there were no significant differences even with shifts of up to 9 mm in the lateral directions. However, the dose decreased by about 10% with shifts of 9 mm in the direction away from the irradiation aperture. When comparing both calculations in the phantom surface up to 3 cm, the maximum differences between the fluxes calculated by NCT high-speed and those calculated by the Monte Carlo code for thermal neutrons and epithermal neutrons were 10% and 18%, respectively. The time required by the NCT high-speed code was about one tenth of that of the Monte Carlo calculation. In the evaluation, the longitudinal displacement has a considerable effect on the organ doses. We also achieved faster calculation of the depth distribution of thermal neutron flux using the NCT high-speed calculation code. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  18. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out by a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information through the different layers, which enhances performance. Simulations are carried out for an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized adaptation among 22 code rates without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  19. Local statistics adaptive entropy coding method for the improvement of H.26L VLC coding

    NASA Astrophysics Data System (ADS)

    Yoo, Kook-yeol; Kim, Jong D.; Choi, Byung-Sun; Lee, Yung Lyul

    2000-05-01

    In this paper, we propose an adaptive entropy coding method to improve the VLC coding efficiency of the H.26L TML-1 codec. First, we show that the VLC coding presented in TML-1 does not satisfy the sibling property of entropy coding. We then modify the coding method into a local-statistics-adaptive one that satisfies the property. The proposed method, based on the local symbol statistics, dynamically changes the mapping relationship between symbols and bit patterns in the VLC table according to the sibling property. Note that the codewords in the VLC table of the TML-1 codec are not changed. Since the changed mapping relationship is also derived at the decoder side from the decoded symbols, the proposed VLC coding method does not require any overhead information. The simulation results show that the proposed method gives about 30% and 37% reductions in average bit rate for MB type and CBP information, respectively.
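    The core idea, re-mapping symbols onto a fixed codeword table from statistics both sides can derive, can be illustrated as follows. This is a toy scheme with a hypothetical four-entry prefix-free table, not the actual TML-1 VLC: the most frequent symbol so far is assigned the shortest codeword, and the decoder rebuilds the same mapping from the symbols it has already decoded.

    ```python
    # Codewords stay fixed; only the symbol -> codeword assignment adapts.
    CODEWORDS = ["0", "10", "110", "111"]   # hypothetical prefix-free table

    def ranking(counts, symbols):
        # Most frequent symbol first; ties broken by fixed symbol order.
        return sorted(symbols, key=lambda s: (-counts[s], symbols.index(s)))

    def encode(seq, symbols):
        counts = {s: 0 for s in symbols}
        out = []
        for s in seq:
            order = ranking(counts, symbols)
            out.append(CODEWORDS[order.index(s)])   # rank picks the codeword
            counts[s] += 1
        return "".join(out)

    def decode(bitstream, symbols):
        counts = {s: 0 for s in symbols}
        out, i = [], 0
        while i < len(bitstream):
            order = ranking(counts, symbols)
            for rank, cw in enumerate(CODEWORDS):
                if bitstream.startswith(cw, i):
                    s = order[rank]                 # same mapping as encoder
                    out.append(s)
                    counts[s] += 1
                    i += len(cw)
                    break
        return out

    msg = list("abacabadabacabaa")
    bits = encode(msg, symbols=list("abcd"))
    ```

    Because the decoder updates the same counts from the symbols it reconstructs, encoder and decoder stay synchronized with no overhead bits, mirroring the claim in the abstract.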

  20. Coded-aperture Compton camera for gamma-ray imaging

    NASA Astrophysics Data System (ADS)

    Farber, Aaron M.

    This dissertation describes the development of a novel gamma-ray imaging system concept and presents results from Monte Carlo simulations of the new design. Current designs for large field-of-view gamma cameras suitable for homeland security applications implement either a coded aperture or a Compton scattering geometry to image a gamma-ray source. Both of these systems require large, expensive position-sensitive detectors in order to work effectively. By combining characteristics of both of these systems, a new design can be implemented that does not require such expensive detectors and that can be scaled down to a portable size. This new system has significant promise in homeland security, astronomy, botany and other fields, while future iterations may prove useful in medical imaging, other biological sciences and other areas, such as non-destructive testing. A proof-of-principle study of the new gamma-ray imaging system has been performed by Monte Carlo simulation. Various reconstruction methods have been explored and compared. General-Purpose Graphics-Processor-Unit (GPGPU) computation has also been incorporated. The resulting code is a primary design tool for exploring variables such as detector spacing, material selection and thickness, and pixel geometry. The advancement of the system from a simple 1-dimensional simulation to a full 3-dimensional model is described. Methods of image reconstruction are discussed, and results of simulations consisting of both a 4 x 4 and a 16 x 16 object space mesh are presented. A discussion of the limitations and potential areas of further study is also presented.

  1. Adaptive Transmission and Channel Modeling for Frequency Hopping Communications

    DTIC Science & Technology

    2009-09-21

    proposed adaptive transmission method has much greater system capacity than conventional non-adaptive MC direct-sequence (DS)-CDMA system. • We...several mobile radio systems. First, a new improved allocation algorithm was proposed for multicarrier code-division multiple access (MC-CDMA) system...Multicarrier code-division multiple access (MC-CDMA) system with adaptive frequency hopping (AFH) has attracted attention of researchers due to its

  2. Study on Extremizing Adaptive Systems and Applications to Synthetic Aperture Radars.

    DTIC Science & Technology

    1983-05-01

    Air Force Office of Scientific Research/NL, Bolling Air Force Base, DC 20332. This project was motivated by A. H. Klopf's insightful observation and proposition on the functioning of the neuron cell and the nervous system in

  3. New perspective on single-radiator multiple-port antennas for adaptive beamforming applications.

    PubMed

    Byun, Gangil; Choo, Hosung

    2017-01-01

    One of the most challenging problems in recent antenna engineering fields is to achieve highly reliable beamforming capabilities in an extremely restricted space of small handheld devices. In this paper, we introduce a new perspective on single-radiator multiple-port (SRMP) antenna to alter the traditional approach of multiple-antenna arrays for improving beamforming performances with reduced aperture sizes. The major contribution of this paper is to demonstrate the beamforming capability of the SRMP antenna for use as an extremely miniaturized front-end component in more sophisticated beamforming applications. To examine the beamforming capability, the radiation properties and the array factor of the SRMP antenna are theoretically formulated for electromagnetic characterization and are used as complex weights to form adaptive array patterns. Then, its fundamental performance limits are rigorously explored through enumerative studies by varying the dielectric constant of the substrate, and field tests are conducted using a beamforming hardware to confirm the feasibility. The results demonstrate that the new perspective of the SRMP antenna allows for improved beamforming performances with the ability of maintaining consistently smaller aperture sizes compared to the traditional multiple-antenna arrays.

  4. Fundamental Parameters Line Profile Fitting in Laboratory Diffractometers

    PubMed Central

    Cheary, R. W.; Coelho, A. A.; Cline, J. P.

    2004-01-01

    The fundamental parameters approach to line profile fitting uses physically based models to generate the line profile shapes. Fundamental parameters profile fitting (FPPF) has been used to synthesize and fit data from both parallel-beam and divergent-beam diffractometers. The refined parameters are determined by the diffractometer configuration. In a divergent-beam diffractometer these include the angular aperture of the divergence slit, the width and axial length of the receiving slit, the angular apertures of the axial Soller slits, the length and projected width of the x-ray source, and the absorption coefficient and axial length of the sample. In a parallel-beam system the principal parameters are the angular aperture of the equatorial analyser/Soller slits and the angular apertures of the axial Soller slits. The presence of a monochromator in the beam path is normally accommodated by modifying the wavelength spectrum and/or by changing one or more of the axial divergence parameters. Flat analyzer crystals have been incorporated into FPPF as a Lorentzian-shaped angular acceptance function. One of the intrinsic benefits of the fundamental parameters approach is its adaptability to any laboratory diffractometer. Good fits can normally be obtained over the whole 2θ range without refinement, using the known properties of the diffractometer, such as the slit sizes, diffractometer radius, and emission profile. PMID:27366594

  5. GENERAL CONTROL NONREPRESSIBLE4 Degrades 14-3-3 and the RIN4 Complex to Regulate Stomatal Aperture with Implications on Nonhost Disease Resistance and Drought Tolerance[OPEN

    PubMed Central

    Oh, Sunhee; Lee, Hee-Kyung; Rojas, Clemencia M.

    2017-01-01

    Plants have complex and adaptive innate immune responses against pathogen infections. Stomata are key entry points for many plant pathogens. Both pathogens and plants regulate stomatal aperture for pathogen entry and defense, respectively. Not all plant proteins involved in stomatal aperture regulation have been identified. Here, we report GENERAL CONTROL NONREPRESSIBLE4 (GCN4), an AAA+-ATPase family protein, as one of the key proteins regulating stomatal aperture during biotic and abiotic stress. Silencing of GCN4 in Nicotiana benthamiana and Arabidopsis thaliana compromises host and nonhost disease resistance due to open stomata during pathogen infection. AtGCN4 overexpression plants have reduced H+-ATPase activity, stomata that are less responsive to pathogen virulence factors such as coronatine (phytotoxin produced by the bacterium Pseudomonas syringae) or fusicoccin (a fungal toxin produced by the fungus Fusicoccum amygdali), reduced pathogen entry, and enhanced drought tolerance. This study also demonstrates that AtGCN4 interacts with RIN4 and 14-3-3 proteins and suggests that GCN4 degrades RIN4 and 14-3-3 proteins via a proteasome-mediated pathway and thereby reduces the activity of the plasma membrane H+-ATPase complex, thus reducing proton pump activity to close stomata. PMID:28855332

  6. Laser beam propagation through turbulence and adaptive optics for beam delivery improvement

    NASA Astrophysics Data System (ADS)

    Nicolas, Stephane

    2015-10-01

    We report results from numerical simulations of laser beam propagation through atmospheric turbulence. In particular, we study the statistical variations of the fractional beam energy hitting inside an optical aperture placed at several kilometer distance. The simulations are performed for different turbulence conditions and engagement ranges, with and without the use of turbulence mitigation. Turbulence mitigation is simulated with phase conjugation. The energy fluctuations are deduced from time sequence realizations. It is shown that turbulence mitigation leads to an increase of the mean energy inside the aperture and decrease of the fluctuations even in strong turbulence conditions and long distance engagement. As an example, the results are applied to a high energy laser countermeasure system, where we determine the probability that a single laser pulse, or one of the pulses in a sequence, will provide a lethal energy inside the target aperture. Again, turbulence mitigation contributes to increase the performance of the system at long-distance and for strong turbulence conditions in terms of kill probability. We also discuss a specific case where turbulence contributes to increase the pulse energy within the target aperture. The present analysis can be used to evaluate the performance of a variety of systems, such as directed countermeasures, laser communication, and laser weapons.

  7. Adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope

    NASA Astrophysics Data System (ADS)

    Ma, Haotong; Hu, Haojun; Xie, Wenke; Zhao, Haichuan; Xu, Xiaojun; Chen, Jinbao

    2013-08-01

    We demonstrate adaptive beam shaping for improving the power coupling of a two-Cassegrain-telescope system based on the stochastic parallel gradient descent (SPGD) algorithm and dual phase-only liquid crystal spatial light modulators (LC-SLMs). Adaptive pre-compensation of the wavefront of the projected laser beam at the transmitter telescope is chosen to improve the power coupling efficiency. One phase-only LC-SLM adaptively optimizes the phase distribution of the projected laser beam, and the other generates the turbulence phase screen. The intensity distributions of the dark hollow beam after passing through the turbulent atmosphere, with and without adaptive beam shaping, are analyzed in detail. The influences of propagation distance and aperture size of the Cassegrain telescope on coupling efficiency are investigated theoretically and experimentally. These studies show that the power coupling can be significantly improved by adaptive beam shaping. The technique can be used in optical communication, deep-space optical communication, and relay mirrors.

  8. Low complexity Reed-Solomon-based low-density parity-check design for software defined optical transmission system based on adaptive puncturing decoding algorithm

    NASA Astrophysics Data System (ADS)

    Pan, Xiaolong; Liu, Bo; Zheng, Jianglong; Tian, Qinghua

    2016-08-01

    We propose and demonstrate a low-complexity Reed-Solomon-based low-density parity-check (RS-LDPC) code with an adaptive puncturing decoding algorithm for an elastic optical transmission system. Partially received codes and the relevant columns in the parity-check matrix can be punctured to reduce the calculation complexity by adapting the parity-check matrix during the decoding process. The results show that the complexity of the proposed decoding algorithm is reduced by 30% compared with the regular RS-LDPC system. The optimized code rate of the RS-LDPC code can be obtained after five iterations.
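    The column-puncturing idea can be sketched on a toy parity-check matrix (illustrative only; the paper's RS-based construction and its decoder are not reproduced here): dropping the columns of punctured code bits, and any check rows left empty, yields a smaller matrix for the decoder to work with.

    ```python
    import numpy as np

    def puncture_parity_check(H, punctured_cols):
        """Drop parity-check columns for punctured code bits, then drop any
        check rows left all-zero, shrinking the decoding problem."""
        keep = [j for j in range(H.shape[1]) if j not in set(punctured_cols)]
        Hp = H[:, keep]
        return Hp[Hp.any(axis=1)]   # keep only rows that still check something

    # A toy 3x7 (Hamming-like) parity-check matrix, purely illustrative.
    H = np.array([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
    Hp = puncture_parity_check(H, punctured_cols=[4, 6])
    ```

    In an actual LDPC decoder the punctured positions would instead be treated as erasures or skipped in the message-passing schedule; removing columns is the simplest way to see why the per-iteration work shrinks.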

  9. Two way time transfer results at NRL and USNO

    NASA Technical Reports Server (NTRS)

    Galysh, Ivan J.; Landis, G. Paul

    1993-01-01

    The Naval Research Laboratory (NRL) has developed a two-way time transfer modem system for the United States Naval Observatory (USNO). Two modems in conjunction with a pair of Very Small Aperture Terminals (VSAT) and a communication satellite can achieve sub-nanosecond time transfer. This performance is demonstrated by the results of testing at and between NRL and USNO. The modems use Code Division Multiple Access (CDMA) methods to separate their signals through a single path in the satellite. Each modem transmitted a different Pseudo Random Noise (PRN) code and received the other's PRN code. High-precision time transfer is possible with two-way methods because of the reciprocity of many of the terms of the path and hardware delay between the two modems. The hardware description was given in a previous paper.

  10. First on-sky demonstration of the piezoelectric adaptive secondary mirror.

    PubMed

    Guo, Youming; Zhang, Ang; Fan, Xinlong; Rao, Changhui; Wei, Ling; Xian, Hao; Wei, Kai; Zhang, Xiaojun; Guan, Chunlin; Li, Min; Zhou, Luchun; Jin, Kai; Zhang, Junbo; Deng, Jijiang; Zhou, Longfeng; Chen, Hao; Zhang, Xuejun; Zhang, Yudong

    2016-12-15

    We propose using a piezoelectric adaptive secondary mirror (PASM) in medium-sized adaptive telescopes with 2-4 m apertures to simplify structure and control by utilizing piezoelectric actuators, in contrast with the voice-coil adaptive secondary mirror. A closed-loop experimental setup was built for on-sky demonstration of the 73-element PASM developed by our laboratory. In this Letter, the PASM and the closed-loop adaptive optics system are introduced. High-resolution stellar images were obtained by using the PASM to correct high-order wavefront errors in May 2016. To the best of our knowledge, this is the first successful on-sky demonstration of a PASM. The results show that with the PASM as the deformable mirror, the angular resolution of the 1.8 m telescope can be effectively improved.

  11. Adapting the coping in deliberation (CODE) framework: a multi-method approach in the context of familial ovarian cancer risk management.

    PubMed

    Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate

    2014-11-01

    To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  12. Reducing Speckle In One-Look SAR Images

    NASA Technical Reports Server (NTRS)

    Nathan, K. S.; Curlander, J. C.

    1990-01-01

    Local-adaptive-filter algorithm incorporated into digital processing of synthetic-aperture-radar (SAR) echo data to reduce speckle in resulting imagery. Involves use of image statistics in vicinity of each picture element, in conjunction with original intensity of element, to estimate brightness more nearly proportional to true radar reflectance of corresponding target. Increases ratio of signal to speckle noise without substantial degradation of resolution common to multilook SAR images. Adapts to local variations of statistics within scene, preserving subtle details. Computationally simple. Lends itself to parallel processing of different segments of image, making possible increased throughput.
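    A common textbook form of such a local-statistics speckle filter is the Lee-type filter, sketched below as an illustration of the idea in the abstract rather than the exact JPL algorithm: each pixel is pulled toward its local mean by a gain that depends on how much the local variance exceeds the assumed noise variance, so homogeneous areas are smoothed while detailed areas are preserved.

    ```python
    import numpy as np

    def local_adaptive_despeckle(img, win=5, noise_var=0.05):
        """Lee-style filter: output = mean + gain * (pixel - mean), where the
        gain grows with local signal variance relative to the noise variance."""
        img = np.asarray(img, float)
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        out = np.empty_like(img)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                w = padded[i:i + win, j:j + win]
                mean, var = w.mean(), w.var()
                gain = max(var - noise_var, 0.0) / var if var > 0 else 0.0
                out[i, j] = mean + gain * (img[i, j] - mean)
        return out

    # Speckled flat patch: filtering should cut the variance markedly.
    rng = np.random.default_rng(2)
    speckled = 1.0 + rng.normal(0.0, 0.2, size=(32, 32))
    filtered = local_adaptive_despeckle(speckled, noise_var=0.04)
    ```

    In flat regions the gain collapses to zero and the filter returns the local mean; near edges the local variance is large, the gain approaches one, and the original pixel is retained, which is how subtle details survive.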

  13. High resolution observations using adaptive optics: Achievements and future needs

    NASA Astrophysics Data System (ADS)

    Sankarasubramanian, K.; Rimmele, T.

    2008-06-01

    Over the last few years, several interesting observations have been obtained with the help of solar Adaptive Optics (AO). In this paper, a few observations made using solar AO are highlighted and briefly discussed. A list of disadvantages of the current AO systems is presented. With telescopes larger than 1.5 m expected during the next decade, there is a need to develop the existing AO technologies for large-aperture telescopes. Some aspects of this development are highlighted. Finally, the recent AO developments in India are also presented.

  14. First benchmark of the Unstructured Grid Adaptation Working Group

    NASA Technical Reports Server (NTRS)

    Ibanez, Daniel; Barral, Nicolas; Krakos, Joshua; Loseille, Adrien; Michal, Todd; Park, Mike

    2017-01-01

    Unstructured grid adaptation is a technology that holds the potential to improve the automation and accuracy of computational fluid dynamics and other computational disciplines. Difficulty producing the highly anisotropic elements necessary for simulation on complex curved geometries that satisfies a resolution request has limited this technology's widespread adoption. The Unstructured Grid Adaptation Working Group is an open gathering of researchers working on adapting simplicial meshes to conform to a metric field. Current members span a wide range of institutions including academia, industry, and national laboratories. The purpose of this group is to create a common basis for understanding and improving mesh adaptation. We present our first major contribution: a common set of benchmark cases, including input meshes and analytic metric specifications, that are publicly available to be used for evaluating any mesh adaptation code. We also present the results of several existing codes on these benchmark cases, to illustrate their utility in identifying key challenges common to all codes and important differences between available codes. Future directions are defined to expand this benchmark to mature the technology necessary to impact practical simulation workflows.

  15. Multipath search coding of stationary signals with applications to speech

    NASA Astrophysics Data System (ADS)

    Fehn, H. G.; Noll, P.

    1982-04-01

    This paper deals with the application of multipath search coding (MSC) concepts to the coding of stationary memoryless and correlated sources, and of speech signals, at a rate of one bit per sample. Use is made of three MSC classes: (1) codebook coding, or vector quantization, (2) tree coding, and (3) trellis coding. The paper explains the performance of these coders and compares them both with conventional coders and with rate-distortion bounds. The potential of MSC coding strategies is demonstrated by illustrations. The paper also reports results of MSC coding of speech, where both adaptive quantization and adaptive prediction were included in the coder design.
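    Codebook coding at one bit per sample can be sketched with length-4 blocks and a 2^4 = 16-entry codebook refined by Lloyd iterations. This is a toy vector quantizer illustrating the rate bookkeeping, not the paper's coder; the source below is a hypothetical correlated signal.

    ```python
    import numpy as np

    def train_codebook(blocks, bits_per_sample=1.0, iters=20, seed=0):
        """Lloyd training of a codebook with 2**(rate * block_length) vectors:
        assign each block to its nearest codevector, then move each codevector
        to the centroid of its assigned blocks."""
        rng = np.random.default_rng(seed)
        L = blocks.shape[1]
        K = int(2 ** (bits_per_sample * L))           # 16 vectors for L=4, R=1
        codebook = blocks[rng.choice(len(blocks), size=K, replace=False)].copy()
        for _ in range(iters):
            d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for k in range(K):
                members = blocks[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
        return codebook

    def quantize(blocks, codebook):
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return codebook[d.argmin(axis=1)]

    # Correlated Gaussian source (a smoothed random walk), blocked by 4.
    rng = np.random.default_rng(3)
    x = np.cumsum(rng.normal(size=4000)) * 0.1
    x = x - x.mean()
    blocks = x.reshape(-1, 4)
    cb = train_codebook(blocks)
    mse = ((quantize(blocks, cb) - blocks) ** 2).mean()
    ```

    Each block index needs log2(16) = 4 bits for 4 samples, i.e. exactly one bit per sample; correlation between samples is what lets a small codebook capture most of the block shapes.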

  16. Research on pre-processing of QR Code

    NASA Astrophysics Data System (ADS)

    Sun, Haixing; Xia, Haojie; Dong, Ning

    2013-10-01

    QR code encodes many kinds of information because of its advantages: large storage capacity, high reliability, high-speed reading from any direction, small printing size, and high-efficiency representation of Chinese characters, etc. In order to obtain a clearer binarized image from a complex background and improve the recognition rate of QR code, this paper researches pre-processing methods for QR code (Quick Response Code) and shows algorithms and results of image pre-processing for QR code recognition. The conventional method is improved by modifying Sauvola's adaptive text binarization method. Additionally, a QR code extraction step that adapts to different image sizes and a flexible image correction approach are introduced, improving the efficiency and accuracy of QR code image processing.
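    Sauvola's adaptive threshold, the starting point such pre-processing typically modifies, can be sketched as follows (window size and k, R parameters are common defaults, not values from this paper): the threshold t = m * (1 + k * (s/R - 1)) is computed per pixel from the local mean m and standard deviation s, which handles uneven illumination far better than a single global threshold.

    ```python
    import numpy as np

    def sauvola_threshold(img, win=15, k=0.2, R=128.0):
        """Sauvola adaptive binarization: pixel is foreground (True) when it
        exceeds t = m * (1 + k * (s / R - 1)) over a win x win neighborhood."""
        img = np.asarray(img, float)
        pad = win // 2
        padded = np.pad(img, pad, mode="reflect")
        out = np.zeros(img.shape, dtype=bool)
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                w = padded[i:i + win, j:j + win]
                m, s = w.mean(), w.std()
                t = m * (1.0 + k * (s / R - 1.0))
                out[i, j] = img[i, j] > t
        return out

    # Dark 'module' on a bright, unevenly lit background.
    x = np.fromfunction(lambda i, j: 200.0 - 0.5 * j, (40, 40))
    x[15:25, 15:25] = 30.0
    binary = sauvola_threshold(x)
    ```

    Production implementations compute the local mean and standard deviation with integral images so the cost is independent of the window size; the double loop above is kept only for clarity.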

  17. Analysis of random point images with the use of symbolic computation codes and generalized Catalan numbers

    NASA Astrophysics Data System (ADS)

    Reznik, A. L.; Tuzikov, A. V.; Solov'ev, A. A.; Torgov, A. V.

    2016-11-01

    Original codes and combinatorial-geometrical computational schemes are presented, which are developed and applied for finding exact analytical formulas that describe the probability of errorless readout of random point images recorded by a scanning aperture with a limited number of threshold levels. Combinatorial problems encountered in the course of the study and associated with the new generalization of Catalan numbers are formulated and solved. An attempt is made to find the explicit analytical form of these numbers, which is, on the one hand, a necessary stage of solving the basic research problem and, on the other hand, an independent self-consistent problem.
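    The ordinary Catalan numbers that the work above generalizes satisfy the convolution recurrence C_0 = 1, C_{n+1} = sum over i of C_i * C_{n-i}. A minimal sketch of that recurrence (the paper's generalized variant and its symbolic-computation codes are not reproduced here):

    ```python
    def catalan(n_max):
        """Catalan numbers C_0..C_{n_max} via the convolution recurrence
        C_{n+1} = sum_{i=0}^{n} C_i * C_{n-i}, with C_0 = 1."""
        C = [1]
        for n in range(n_max):
            C.append(sum(C[i] * C[n - i] for i in range(n + 1)))
        return C

    cats = catalan(8)   # [1, 1, 2, 5, 14, 42, 132, 429, 1430]
    ```

    The same dynamic-programming shape carries over to many "generalized Catalan" counts: only the weights inside the convolution change.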

  18. Advanced x-ray imaging spectrometer

    NASA Technical Reports Server (NTRS)

    Callas, John L. (Inventor); Soli, George A. (Inventor)

    1998-01-01

    An x-ray spectrometer that also provides images of an x-ray source. Coded aperture imaging techniques are used to provide high resolution images. Imaging position-sensitive x-ray sensors with good energy resolution are utilized to provide excellent spectroscopic performance. The system produces high resolution spectral images of the x-ray source which can be viewed in any one of a number of specific energy bands.

  19. SU-C-201-03: Coded Aperture Gamma-Ray Imaging Using Pixelated Semiconductor Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, S; Kaye, W; Jaworski, J

    2015-06-15

    Purpose: Improved localization of gamma-ray emissions from radiotracers is essential to the progress of nuclear medicine. Polaris is a portable, room-temperature operated gamma-ray imaging spectrometer composed of two 3×3 arrays of thick CdZnTe (CZT) detectors, which detect gammas between 30 keV and 3 MeV with energy resolution of <1% FWHM at 662 keV. Compton imaging is used to map out source distributions in 4-pi space; however, it is only effective above 300 keV, where Compton scatter is dominant. This work extends imaging to photoelectric energies (<300 keV) using coded aperture imaging (CAI), which is essential for localization of Tc-99m (140 keV). Methods: CAI, similar to the pinhole camera, relies on an attenuating mask, with open/closed elements, placed between the source and position-sensitive detectors. Partial attenuation of the source results in a “shadow” or count distribution that closely matches a portion of the mask pattern. Ideally, each source direction corresponds to a unique count distribution. Using backprojection reconstruction, the source direction is determined within the field of view. Knowledge of the 3D position of interaction results in improved image quality. Results: Using a single array of detectors, a coded aperture mask, and multiple Co-57 (122 keV) point sources, image reconstruction is performed in real time, on an event-by-event basis, resulting in images with an angular resolution of ∼6 degrees. Although material nonuniformities contribute to image degradation, the superposition of images from individual detectors results in improved SNR. CAI was integrated with Compton imaging for a seamless transition between energy regimes. Conclusion: For the first time, CAI has been applied to thick, 3D position-sensitive CZT detectors. Real-time, combined CAI and Compton imaging is performed using two 3×3 detector arrays, resulting in a source distribution in space. This system has been commercialized by H3D, Inc. and is being acquired for various applications worldwide, including proton therapy imaging R&D.
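
    The backprojection decoding described above can be sketched in a toy 1D setting: a binary open/closed mask casts a shifted shadow on the detector, and correlating the measured counts with shifted copies of the mask recovers the source direction. This is an illustrative numpy sketch with a hypothetical random mask and geometry, not the Polaris reconstruction code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 1D coded-aperture demo: a random open/closed mask casts a
    # shifted "shadow" (count distribution) on a position-sensitive detector.
    n = 64
    mask = rng.integers(0, 2, n)            # open (1) / closed (0) elements
    true_shift = 17                         # source direction -> shadow offset

    shadow = np.roll(mask, true_shift).astype(float)
    shadow += rng.poisson(0.1, n)           # counting noise

    # Backprojection: correlate counts against every possible mask shift;
    # the peak identifies the source direction within the field of view.
    scores = np.array([np.dot(shadow, np.roll(mask, s)) for s in range(n)])
    estimated_shift = int(np.argmax(scores))
    ```

    In practice the mask is 2D (often a MURA pattern) and the correlation is done against a decoding array, but the shift-and-correlate structure is the same.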

  20. Fractal Viscous Fingering in Fracture Networks

    NASA Astrophysics Data System (ADS)

    Boyle, E.; Sams, W.; Ferer, M.; Smith, D. H.

    2007-12-01

    We have used two very different physical models and computer codes to study miscible injection of a low-viscosity fluid into a simple fracture network, where it displaces a much more viscous "defending" fluid through "rock" that is otherwise impermeable. The first code (NETfLow) is a standard pore-level model, originally intended to treat laboratory-scale experiments; it assumes negligible mixing of the two fluids. The other code (NFFLOW) was written to treat reservoir-scale engineering problems; it explicitly treats the flow through the fractures and allows for significant mixing of the fluids at the interface. Both codes treat the fractures as parallel plates of different effective apertures. Results are presented for the composition profiles from both codes. Independent of the degree of fluid mixing, the profiles from both models have a functional form identical to that for fractal viscous fingering (i.e., diffusion-limited aggregation, DLA). The two codes, which solve the equations for different models, gave similar results; together they suggest that the injection of a low-viscosity fluid into large-scale fracture networks may be much more significantly affected by fractal fingering than previously illustrated.

  1. Modulation transfer function of a fish-eye lens based on the sixth-order wave aberration theory.

    PubMed

    Jia, Han; Lu, Lijun; Cao, Yiqing

    2018-01-10

    A calculation program for the modulation transfer function (MTF) of a fish-eye lens is developed with the autocorrelation method, in which the sixth-order wave aberration theory of ultra-wide-angle optical systems is used to simulate the wave aberration distribution at the exit pupil of the optical system. The autocorrelation integral is evaluated with Gauss-Legendre quadrature, and magnification chromatic aberration is taken into account to calculate the polychromatic MTF. The MTF results for a given example are then compared with those previously obtained from the fourth-order wave aberration theory of plane-symmetrical optical systems and with those from the Zemax program. The study shows that the MTF based on the sixth-order wave aberration theory has satisfactory calculation accuracy even for a fish-eye lens with a large acceptance aperture. The impacts of different types of aberrations on the MTF of a fish-eye lens are analyzed. Finally, we apply the self-adaptive, normalized real-coded genetic algorithm and the MTF model developed in the paper to optimize the Nikon F/2.8 fish-eye lens; the optimized system shows better MTF performance than the original design.
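
    The autocorrelation method mentioned above computes the incoherent MTF as the normalized autocorrelation of the pupil function. A minimal numpy sketch for a diffraction-limited circular aperture follows; it uses a grid-based FFT autocorrelation rather than the paper's Gauss-Legendre quadrature over the aberrated wavefront, and the unit-radius aperture is an assumption for illustration.

    ```python
    import numpy as np

    n = 256
    x = np.linspace(-2.0, 2.0, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1.0).astype(float)   # unit-radius clear aperture

    # Wiener-Khinchin: the autocorrelation is the inverse FFT of the power
    # spectrum of the pupil function.
    acf = np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(pupil))**2).real)
    mtf = acf / acf.max()                        # normalize so MTF(0) = 1

    # For an aberrated system one would substitute the complex pupil
    # exp(1j * 2*np.pi * W(x, y) / wavelength) for the binary aperture,
    # with W the wave aberration function from the aberration theory.
    ```

    The MTF falls to zero at a shift equal to the pupil diameter, i.e., at the incoherent cutoff frequency.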

  2. Incorporating spike-rate adaptation into a rate code in mathematical and biological neurons

    PubMed Central

    Ralston, Bridget N.; Flagg, Lucas Q.; Faggin, Eric

    2016-01-01

    For a slowly varying stimulus, the simplest relationship between a neuron's input and output is a rate code, in which the spike rate is a unique function of the stimulus at that instant. In the case of spike-rate adaptation, there is no unique relationship between input and output, because the spike rate at any time depends both on the instantaneous stimulus and on prior spiking (the “history”). To improve the decoding of spike trains produced by neurons that show spike-rate adaptation, we developed a simple scheme that incorporates “history” into a rate code. We utilized this rate-history code successfully to decode spike trains produced by 1) mathematical models of a neuron in which the mechanism for adaptation (IAHP) is specified, and 2) the gastropyloric receptor (GPR2), a stretch-sensitive neuron in the stomatogastric nervous system of the crab Cancer borealis, that exhibits long-lasting adaptation of unknown origin. Moreover, when we modified the spike rate either mathematically in a model system or by applying neuromodulatory agents to the experimental system, we found that changes in the rate-history code could be related to the biophysical mechanisms responsible for altering the spiking. PMID:26888106
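
    A minimal rate-model illustration of why a pure rate code fails under adaptation (hypothetical leaky-adaptation dynamics with assumed parameters, not the paper's IAHP model or the GPR2 data): the same stimulus value yields different rates depending on prior firing, so a decoder must carry a history term.

    ```python
    import numpy as np

    def simulate(stim, gain=1.0, tau=50.0, strength=0.5, dt=1.0):
        """Toy spike-rate adaptation: the output rate is the driven rate minus
        an adaptation variable `a` that leakily integrates the cell's own
        firing history, so identical stimuli evoke history-dependent rates."""
        a = 0.0
        rates = []
        for s in stim:
            r = max(gain * s - a, 0.0)           # rate-history relation
            a += dt * (strength * r - a) / tau   # adaptation integrates firing
            rates.append(r)
        return np.array(rates)
    ```

    For a constant stimulus the rate decays from its onset value toward a lower steady state, which is exactly the behavior a naive stimulus-to-rate lookup cannot capture.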

  3. Speckle imaging through turbulent atmosphere based on adaptable pupil segmentation.

    PubMed

    Loktev, Mikhail; Soloviev, Oleg; Savenko, Svyatoslav; Vdovin, Gleb

    2011-07-15

    We report what are, to our knowledge, the first results obtained with adaptable multiaperture imaging through turbulence on a horizontal atmospheric path. We show that the resolution can be improved by adaptively matching the size of the subaperture to the characteristic size of the turbulence. Further improvement is achieved by the deconvolution of a number of subimages registered simultaneously through multiple subapertures. Different implementations of multiaperture geometry, including pupil multiplication, pupil image sampling, and a plenoptic telescope, are considered. Resolution improvement has been demonstrated on a ∼550 m horizontal turbulent path, using a combination of aperture sampling, speckle image processing, and, optionally, frame selection. © 2011 Optical Society of America

  4. Interactive Finite Elements for General Engine Dynamics Analysis

    NASA Technical Reports Server (NTRS)

    Adams, M. L.; Padovan, J.; Fertis, D. G.

    1984-01-01

    General nonlinear finite element codes were adapted for the purpose of analyzing the dynamics of gas turbine engines. In particular, this adaptation required the development of a squeeze-film damper element software package and its implementation into a representative current-generation code. The ADINA code was selected because of prior use and familiarity with its internal structure and logic. This objective was met, and the results indicate that such use of general-purpose codes is a viable alternative to specialized codes for general dynamics analysis of engines.

  5. A User's Guide to AMR1D: An Instructional Adaptive Mesh Refinement Code for Unstructured Grids

    NASA Technical Reports Server (NTRS)

    deFainchtein, Rosalinda

    1996-01-01

    This report documents the code AMR1D, which is currently posted on the World Wide Web (http://sdcd.gsfc.nasa.gov/ESS/exchange/contrib/de-fainchtein/adaptive_mesh_refinement.html). AMR1D is a one-dimensional finite element fluid-dynamics solver, capable of adaptive mesh refinement (AMR). It was written as an instructional tool for AMR on unstructured-mesh codes. It is meant to illustrate the minimum requirements for AMR in more than one dimension. For that purpose, it uses the same type of data structure that would be necessary in a two-dimensional AMR code (loosely following the algorithm described by Lohner).
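
    The flag-and-refine step such a code must support can be sketched as a single 1D pass over the mesh (illustrative logic with an assumed jump indicator, not AMR1D's actual data structure):

    ```python
    import numpy as np

    def refine_1d(x, u, tol):
        """One adaptive-refinement pass: split every cell whose solution jump
        exceeds `tol` by inserting the cell midpoint, mimicking the
        flag-and-refine step of an unstructured AMR code."""
        new_x = [x[0]]
        for i in range(len(x) - 1):
            if abs(u[i + 1] - u[i]) > tol:
                new_x.append(0.5 * (x[i] + x[i + 1]))   # refine this cell
            new_x.append(x[i + 1])
        return np.array(new_x)
    ```

    A real unstructured AMR code additionally tracks parent/child element connectivity so that refinement and coarsening remain local operations; that bookkeeping is what generalizes to two dimensions.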

  6. Wall-interference assessment and corrections for transonic NACA 0012 airfoil data from various wind tunnels. M.S. Thesis - George Washington Univ., 1988

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Newman, Perry A.

    1991-01-01

    A nonlinear, four wall, post-test wall interference assessment/correction (WIAC) code was developed for transonic airfoil data from solid wall wind tunnels with flexibly adaptable top and bottom walls. The WIAC code was applied over a broad range of test conditions to four sets of NACA 0012 airfoil data, from two different adaptive wall wind tunnels. The data include many test points for fully adapted walls, as well as numerous partially adapted and unadapted test points, which together represent many different model/tunnel configurations and possible wall interference effects. Small corrections to the measured Mach numbers and angles of attack were obtained from the WIAC code even for fully adapted data; these corrections generally improve the correlation among the various sets of airfoil data and simultaneously improve the correlation of the data with calculations for a 2-D, free air, Navier-Stokes code. The WIAC corrections for airfoil data taken in fully adapted wall test sections are shown to be significantly smaller than those for comparable airfoil data from straight, slotted wall test sections. This indicates, as expected, a lesser degree of wall interference in the adapted wall tunnels relative to the slotted wall tunnels. Application of the WIAC code to this data was, however, somewhat more difficult and time consuming than initially expected from similar previous experience with WIAC applications to slotted wall data.

  7. Advanced designs for non-imaging submillimeter-wave Winston cone concentrators

    NASA Astrophysics Data System (ADS)

    Nelson, A. O.; Grossman, E. N.

    2014-05-01

    We describe the design and simulation of several non-imaging concentrators designed to couple submillimeter wavelength radiation from free space into highly overmoded, rectangular, WR-10 waveguide. Previous designs are altered to improve the uniformity of efficiency rather than the efficiency itself. The concentrators are intended for use as adapters between instruments using overmoded WR-10 waveguide as input or output and sources propagating through free space. Previous simulation and measurement have shown that the angular response is primarily determined by the Winston cone and is well predicted by geometric optics theory while the efficiencies are primarily determined by the transition section. Additionally, previous work has shown insensitivity to polarization, orientation and beam size. Several separate concentrator designs are studied, all of which use a Winston cone (also known as a compound parabolic concentrator) with an input diameter ranging from 4 mm to 16 mm, and "throat" diameters of less than 0.5 mm to 4 mm as the initial interface. The use of various length adiabatic circular-to-rectangular transition sections is investigated, along with the effect of an additional, 25 mm waveguide section designed to model the internal waveguide of the power meter. Adapters without a transition section and a rectangular Winston cone throat aperture and double cone configurations are also studied. Adapters are analyzed in simulation for consistent efficiency across the opening aperture.

  8. Open-source framework for documentation of scientific software written on MATLAB-compatible programming languages

    NASA Astrophysics Data System (ADS)

    Konnik, Mikhail V.; Welsh, James

    2012-09-01

    Numerical simulators for adaptive optics systems have become an essential tool for the research and development of future advanced astronomical instruments. However, the growing software code of a numerical simulator makes it difficult to continue to support the code itself. Inadequate documentation of astronomical software for adaptive optics simulators can hinder development, since the documentation must contain up-to-date schemes and mathematical descriptions of what is implemented in the software. Although most modern programming environments like MATLAB or Octave have built-in documentation facilities, these are often insufficient for describing a typical adaptive optics simulator code. This paper describes a general cross-platform framework for the documentation of scientific software using open-source tools such as LaTeX, Mercurial, Doxygen, and Perl. Using a Perl script that translates the comments of MATLAB M-files into C-like syntax, one can use Doxygen to generate and update the documentation for the scientific source code. The documentation generated by this framework contains the current code description with mathematical formulas, images, and bibliographical references. A detailed description of the framework components is presented, along with guidelines for deploying the framework. Examples of code documentation for the scripts and functions of a MATLAB-based adaptive optics simulator are provided.
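
    The comment-translation step can be sketched in a few lines (shown here in Python rather than the framework's Perl, and simplified: it only rewrites comment markers at the start of a line, so `%` characters elsewhere are left untouched):

    ```python
    import re

    def matlab_comments_to_clike(src: str) -> str:
        """Convert leading MATLAB '%' comment markers to C-style '//' so that
        Doxygen's C parser can pick up the documentation blocks."""
        out = []
        for line in src.splitlines():
            out.append(re.sub(r"^(\s*)%+", r"\1//", line))
        return "\n".join(out)
    ```

    Doxygen would consume the filtered text via its input-filter mechanism, leaving the original M-files unchanged.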

  9. Complexity control algorithm based on adaptive mode selection for interframe coding in high efficiency video coding

    NASA Astrophysics Data System (ADS)

    Chen, Gang; Yang, Bing; Zhang, Xiaoyun; Gao, Zhiyong

    2017-07-01

    The latest high efficiency video coding (HEVC) standard significantly increases encoding complexity in exchange for improved coding efficiency. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn great attention in recent years. A complexity control algorithm based on adaptive mode selection is proposed for interframe coding in HEVC. Given the direct proportionality between encoding time and computational complexity, computational complexity is measured in terms of encoding time. First, the complexity target is mapped to a set of prediction modes. Then, an adaptive mode selection algorithm is proposed for the mode decision process. Specifically, an optimal mode combination scheme, chosen through offline statistics, is applied at low complexity. If the complexity budget has not been used up, an adaptive mode sorting method is employed to further improve coding efficiency. The experimental results show that the proposed algorithm achieves a very wide complexity control range (down to 10% of full complexity) for the HEVC encoder while maintaining good rate-distortion performance. For the low-delay P condition, compared with the direct resource allocation method and the state-of-the-art method, average gains of 0.63 and 0.17 dB in BD-PSNR, respectively, are observed for 18 sequences when the target complexity is around 40%.
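
    The budget-driven mode selection can be illustrated with a greedy sketch (hypothetical per-mode time costs and rate-distortion benefits; the paper derives its mode combinations from offline statistics rather than this ranking rule):

    ```python
    def select_modes(modes, time_budget):
        """Greedy sketch of complexity-constrained mode selection.

        `modes` maps a prediction-mode name to a tuple
        (estimated_encoding_time, estimated_rd_benefit). Modes are admitted
        in order of benefit-per-unit-time until the time budget is spent.
        """
        chosen, spent = [], 0.0
        ranked = sorted(modes, key=lambda m: modes[m][1] / modes[m][0],
                        reverse=True)
        for m in ranked:
            t = modes[m][0]
            if spent + t <= time_budget:
                chosen.append(m)
                spent += t
        return chosen
    ```

    Lowering the budget prunes the cheapest-to-sacrifice modes first, which is the intuition behind trading encoding time against rate-distortion performance.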

  10. Fuzzy support vector machines for adaptive Morse code recognition.

    PubMed

    Yang, Cheng-Hong; Jin, Li-Cheng; Chuang, Li-Yeh

    2006-11-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, facilitating mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as a communication adaptive device for persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool. Therefore, an adaptive automatic recognition method with a high recognition rate is needed. The proposed system uses both fuzzy support vector machines and the variable-degree variable-step-size least-mean-square algorithm to achieve these objectives. We apply fuzzy memberships to each point, and provide different contributions to the decision learning function for support vector machines. Statistical analyses demonstrated that the proposed method elicited a higher recognition rate than other algorithms in the literature.

  11. Nine-year-old children use norm-based coding to visually represent facial expression.

    PubMed

    Burton, Nichola; Jeffery, Linda; Skinner, Andrew L; Benton, Christopher P; Rhodes, Gillian

    2013-10-01

    Children are less skilled than adults at making judgments about facial expression. This could be because they have not yet developed adult-like mechanisms for visually representing faces. Adults are thought to represent faces in a multidimensional face-space, and have been shown to code the expression of a face relative to the norm or average face in face-space. Norm-based coding is economical and adaptive, and may be what makes adults more sensitive to facial expression than children. This study investigated the coding system that children use to represent facial expression. An adaptation aftereffect paradigm was used to test 24 adults and 18 children (9 years 2 months to 9 years 11 months old). Participants adapted to weak and strong antiexpressions. They then judged the expression of an average expression. Adaptation created aftereffects that made the test face look like the expression opposite that of the adaptor. Consistent with the predictions of norm-based but not exemplar-based coding, aftereffects were larger for strong than weak adaptors for both age groups. Results indicate that, like adults, children's coding of facial expressions is norm-based. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Adaptive image coding based on cubic-spline interpolation

    NASA Astrophysics Data System (ADS)

    Jiang, Jian-Xing; Hong, Shao-Hua; Lin, Tsung-Ching; Wang, Lin; Truong, Trieu-Kien

    2014-09-01

    Previous work has shown that, at low bit rates, downsampling prior to coding and upsampling after decoding can achieve better compression performance than standard coding algorithms such as JPEG and H.264/AVC. However, at high bit rates, the sampling-based schemes introduce more distortion, and the maximum bit rate at which a sampling-based scheme outperforms the standard algorithm is image-dependent. In this paper, a practical adaptive image coding algorithm based on cubic-spline interpolation (CSI) is proposed. The algorithm adaptively selects between a CSI-based modified JPEG coder and standard JPEG for a given target bit rate, using so-called ρ-domain analysis. The experimental results indicate that, compared with standard JPEG, the proposed algorithm performs better at low bit rates while maintaining the same performance at high bit rates.

  13. Joint aperture detection for speckle reduction and increased collection efficiency in ophthalmic MHz OCT

    PubMed Central

    Klein, Thomas; André, Raphael; Wieser, Wolfgang; Pfeiffer, Tom; Huber, Robert

    2013-01-01

    Joint-aperture optical coherence tomography (JA-OCT) is an angle-resolved OCT method, in which illumination from an active channel is simultaneously probed by several passive channels. JA-OCT increases the collection efficiency and effective sensitivity of the OCT system without increasing the power on the sample. Additionally, JA-OCT provides angular scattering information about the sample in a single acquisition, so the OCT imaging speed is not reduced. Thus, JA-OCT is especially suitable for ultra high speed in-vivo imaging. JA-OCT is compared to other angle-resolved techniques, and the relation between joint aperture imaging, adaptive optics, coherent and incoherent compounding is discussed. We present angle-resolved imaging of the human retina at an axial scan rate of 1.68 MHz, and demonstrate the benefits of JA-OCT: Speckle reduction, signal increase and suppression of specular and parasitic reflections. Moreover, in the future JA-OCT may allow for the reconstruction of the full Doppler vector and tissue discrimination by analysis of the angular scattering dependence. PMID:23577296

  14. Interferometric inverse synthetic aperture radar imaging for space targets based on wideband direct sampling using two antennas

    NASA Astrophysics Data System (ADS)

    Tian, Biao; Liu, Yang; Xu, Shiyou; Chen, Zengping

    2014-01-01

    Interferometric inverse synthetic aperture radar (InISAR) imaging provides complementary information to monostatic inverse synthetic aperture radar (ISAR) imaging. This paper proposes a new InISAR imaging system for space targets based on wideband direct sampling with two antennas. The system is easy to realize in engineering, since the motion trajectory of space targets can be known in advance, and it is simpler than a three-receiver configuration. In the preprocessing step, high-speed motion compensation is carried out by designing an adaptive matched filter containing the speed obtained from the narrowband information. Then, coherent processing and the keystone transform for ISAR imaging are applied so as to preserve the phase history at each antenna. Through appropriate collocation of the system, image registration and phase unwrapping can be avoided. For situations in which this condition is not satisfied, the influence of baseline variation is analyzed and a compensation method is adopted. The target size along the corresponding dimension can then be obtained by interferometric processing of the two complex ISAR images. Experimental results demonstrate the validity of the analysis and of the three-dimensional imaging algorithm.
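
    The interferometric step itself reduces to a phase-difference measurement between corresponding pixels of the two registered complex ISAR images. A toy sketch with assumed X-band numbers (far-field geometry, baseline short enough that the interferometric phase does not wrap):

    ```python
    import numpy as np

    lam = 0.03                     # wavelength in meters (assumed X-band)
    B = 1.0                        # antenna baseline in meters (assumed)
    theta_true = np.deg2rad(0.5)   # cross-range angle of the scatterer

    # Forward model: path-length difference B*sin(theta) between antennas
    # produces an interferometric phase of 2*pi*B*sin(theta)/lambda.
    dphi = 2 * np.pi * B * np.sin(theta_true) / lam
    s1, s2 = 1.0 + 0j, np.exp(-1j * dphi)      # the two complex pixel values

    # Inversion: the angle of s1 * conj(s2) recovers dphi (no wrapping here),
    # from which the scatterer's angle, and hence its position, follows.
    theta_est = np.arcsin(np.angle(s1 * np.conj(s2)) * lam / (2 * np.pi * B))
    ```

    With a longer baseline the phase exceeds ±π and wraps, which is exactly why the paper's antenna collocation is chosen to avoid phase unwrapping.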

  15. Enabling the detection of UV signal in multimodal nonlinear microscopy with catalogue lens components.

    PubMed

    Vogel, Martin; Wingert, Axel; Fink, Rainer H A; Hagl, Christian; Ganikhanov, Feruz; Pfeffer, Christian P

    2015-10-01

    Using an optical system made from fused silica catalogue optical components, third-order nonlinear microscopy has been enabled on conventional Ti:sapphire laser-based multiphoton microscopy setups. The optical system is designed using two lens groups with straightforward adaptation to other microscope stands when one of the lens groups is exchanged. Within the theoretical design, the optical system collects and transmits light with wavelengths between the near ultraviolet and the near infrared from an object field of at least 1 mm in diameter within a resulting numerical aperture of up to 0.56. The numerical aperture can be controlled with a variable aperture stop between the two lens groups of the condenser. We demonstrate this new detection capability in third harmonic generation imaging experiments at the harmonic wavelength of ∼300 nm and in multimodal nonlinear optical imaging experiments using third-order sum frequency generation and coherent anti-Stokes Raman scattering microscopy so that the wavelengths of the detected signals range from ∼300 nm to ∼660 nm. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  16. Beamforming using subspace estimation from a diagonally averaged sample covariance.

    PubMed

    Quijano, Jorge E; Zurk, Lisa M

    2017-08-01

    The potential benefit of a large-aperture sonar array for high resolution target localization is often challenged by the lack of sufficient data required for adaptive beamforming. This paper introduces a Toeplitz-constrained estimator of the clairvoyant signal covariance matrix corresponding to multiple far-field targets embedded in background isotropic noise. The estimator is obtained by averaging along subdiagonals of the sample covariance matrix, followed by covariance extrapolation using the method of maximum entropy. The sample covariance is computed from limited data snapshots, a situation commonly encountered with large-aperture arrays in environments characterized by short periods of local stationarity. Eigenvectors computed from the Toeplitz-constrained covariance are used to construct signal-subspace projector matrices, which are shown to reduce background noise and improve detection of closely spaced targets when applied to subspace beamforming. Monte Carlo simulations corresponding to increasing array aperture suggest convergence of the proposed projector to the clairvoyant signal projector, thereby outperforming the classic projector obtained from the sample eigenvectors. Beamforming performance of the proposed method is analyzed using simulated data, as well as experimental data from the Shallow Water Array Performance experiment.
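
    The diagonal-averaging step described above can be sketched directly in numpy (this covers only the Toeplitz-constraint stage; the paper's maximum-entropy covariance extrapolation is omitted):

    ```python
    import numpy as np

    def toeplitz_average(scm: np.ndarray) -> np.ndarray:
        """Average the sample covariance matrix along its subdiagonals to
        impose the Toeplitz structure expected for far-field sources in
        isotropic noise on a uniform line array."""
        n = scm.shape[0]
        out = np.empty_like(scm)
        for k in range(n):
            avg = np.mean(np.diagonal(scm, offset=k))   # k-th superdiagonal
            idx = np.arange(n - k)
            out[idx, idx + k] = avg
            out[idx + k, idx] = np.conj(avg)            # keep Hermitian symmetry
        return out
    ```

    Each subdiagonal of a Toeplitz covariance is constant, so averaging along it pools the limited snapshots and reduces estimator variance; an already-Toeplitz matrix passes through unchanged.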

  17. Tracking in a ground-to-satellite optical link: effects due to lead-ahead and aperture mismatch, including temporal tracking response.

    PubMed

    Basu, Santasri; Voelz, David

    2008-07-01

    Establishing a link between a ground station and a geosynchronous orbiting satellite can be aided greatly with the use of a beacon on the satellite. A tracker, or even an adaptive optics system, can use the beacon during communication or tracking activities to correct beam pointing for atmospheric turbulence and mount jitter effects. However, the pointing lead-ahead required to illuminate the moving object and an aperture mismatch between the tracking and the pointing apertures can limit the effectiveness of the correction, as the sensed tilt will not be the same as the tilt required for optimal transmission to the satellite. We have developed an analytical model that addresses the combined impact of these tracking issues in a ground-to-satellite optical link. We present these results for different tracker/pointer configurations. By setting the low-pass cutoff frequency of the tracking servo properly, the tracking errors can be minimized. The analysis considers geosynchronous Earth orbit satellites as well as low Earth orbit satellites.

  18. Adaptive Nodal Transport Methods for Reactor Transient Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas Downar; E. Lewis

    2005-08-31

    The goal of this project was to develop methods for adaptively treating the angular, spatial, and time dependence of the neutron flux in reactor transient analysis. These methods were demonstrated in the DOE transport nodal code VARIANT and the US NRC spatial kinetics code PARCS.

  19. Detection and Classification of Objects in Synthetic Aperture Radar Imagery

    DTIC Science & Technology

    2006-02-01

    a higher False Alarm Rate (FAR). Currently, a standard edge detector is the Canny algorithm, which is available with the mathematics package MATLAB ...the algorithm used to calculate the Radon transform. The MATLAB implementation uses the built-in Radon transform procedure, which is extremely... MATLAB code for a faster forward-backward selection process has also been provided. In both cases, the feature selection was accomplished by using

  20. Volumetric Real-Time Imaging Using a CMUT Ring Array

    PubMed Central

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods—flash, classic phased array (CPA), and synthetic phased array (SPA)—were used in the study. For SPA imaging, two techniques to improve the image quality—Hadamard coding and aperture weighting—were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming. PMID:22718870
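
    The Hadamard-coding idea above can be sketched with a toy linear acquisition model in numpy (assumed noise level, beamforming geometry omitted): each firing transmits on all elements with ±1 weights from a row of a Hadamard matrix, so every shot carries full transmit power, and the orthogonality of the matrix recovers the individual per-element responses at decode time with an SNR gain.

    ```python
    import numpy as np

    def hadamard(n):
        """Sylvester construction of an n-by-n Hadamard matrix (n a power of 2)."""
        H = np.array([[1.0]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    rng = np.random.default_rng(1)
    n = 8                                    # transmit elements (power of two)
    H = hadamard(n)

    x = rng.normal(size=n)                   # per-element echo to recover
    noise = 0.01 * rng.normal(size=n)        # assumed receive noise

    # Coded acquisition: the k-th firing uses the k-th row of H as weights.
    y = H @ x + noise

    # Decoding: H is orthogonal up to a factor n, so x_hat = H^T y / n.
    x_hat = H.T @ y / n
    ```

    Compared with firing one element at a time, the decoded noise is averaged over n shots, which is the SNR advantage the abstract attributes to Hadamard-coded SPA.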


  2. Deficits in context-dependent adaptive coding of reward in schizophrenia

    PubMed Central

    Kirschner, Matthias; Hager, Oliver M; Bischof, Martin; Hartmann-Riemer, Matthias N; Kluge, Agne; Seifritz, Erich; Tobler, Philippe N; Kaiser, Stefan

    2016-01-01

    Theoretical principles of information processing and empirical findings suggest that to efficiently represent all possible rewards in the natural environment, reward-sensitive neurons have to adapt their coding range dynamically to the current reward context. Adaptation ensures that the reward system is most sensitive for the most likely rewards, enabling the system to efficiently represent a potentially infinite range of reward information. A deficit in neural adaptation would prevent precise representation of rewards and could have detrimental effects for an organism’s ability to optimally engage with its environment. In schizophrenia, reward processing is known to be impaired and has been linked to different symptom dimensions. However, despite the fundamental significance of coding reward adaptively, no study has elucidated whether adaptive reward processing is impaired in schizophrenia. We therefore studied patients with schizophrenia (n=27) and healthy controls (n=25), using functional magnetic resonance imaging in combination with a variant of the monetary incentive delay task. Compared with healthy controls, patients with schizophrenia showed less efficient neural adaptation to the current reward context, which leads to imprecise neural representation of reward. Importantly, the deficit correlated with total symptom severity. Our results suggest that some of the deficits in reward processing in schizophrenia might be due to inefficient neural adaptation to the current reward context. Furthermore, because adaptive coding is a ubiquitous feature of the brain, we believe that our findings provide an avenue in defining a general impairment in neural information processing underlying this debilitating disorder. PMID:27430009

  3. Improvement of spectral and axial resolutions in modified coded aperture correlation holography (COACH) imaging system

    NASA Astrophysics Data System (ADS)

    Vijayakumar, A.; Rosen, Joseph

    2017-05-01

    Coded aperture correlation holography (COACH) is a recently developed incoherent digital holographic technique. In COACH, two holograms are recorded: the object hologram for the object under study and another hologram for a point object, called the PSF hologram. The holograms are recorded by interfering two beams, both diffracted from the same object point, but only one of them passes through a random-like coded phase mask (CPM). The same CPM is used for recording the object hologram as well as the PSF hologram. The image is reconstructed by correlating the object hologram with a processed version of the PSF hologram. The COACH technique exhibits the same transverse and axial resolution as regular imaging, but with the unique capability of storing 3D information. The basic COACH configuration consists of a single spatial light modulator (SLM) used for displaying the CPM. In this study, the basic COACH configuration has been advanced by employing two SLMs in the setup. The refractive lens used in the basic COACH setup for collecting and collimating the light diffracted by the object is replaced by an SLM on which an equivalent diffractive lens is displayed. Unlike a refractive lens, the diffractive lens displayed on the first SLM focuses light of different wavelengths to different axial planes, which are separated by distances larger than the axial correlation lengths of the CPM for any visible wavelength. This characteristic extends the boundaries of COACH from three-dimensional to four-dimensional imaging, with the wavelength as its fourth dimension.

  4. An FPGA design of generalized low-density parity-check codes for rate-adaptive optical transport networks

    NASA Astrophysics Data System (ADS)

    Zou, Ding; Djordjevic, Ivan B.

    2016-02-01

    Forward error correction (FEC) is one of the key technologies enabling the next-generation high-speed fiber optical communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making it a viable solution in the next-generation high-speed fiber optical communications.
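
    As a sketch of the short local-code idea, the classic (7,4) Hamming code corrects any single bit error by syndrome decoding. This illustrates only the component code named in the abstract; the paper's GLDPC construction and decoder architecture are far more elaborate:

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j (row 0 = least significant bit).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def hamming_decode(word):
    """Syndrome decoding: the syndrome, read as a binary number, is the
    1-based position of the flipped bit (0 means no error detected)."""
    s = H @ word % 2
    pos = int(s[0] + 2 * s[1] + 4 * s[2])
    corrected = word.copy()
    if pos:
        corrected[pos - 1] ^= 1
    return corrected

codeword = np.array([1, 1, 1, 0, 0, 0, 0])   # satisfies H @ c % 2 == 0
received = codeword.copy()
received[4] ^= 1                              # single channel error
print(hamming_decode(received))               # recovers the codeword
```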

  5. Dopamine Modulates Adaptive Prediction Error Coding in the Human Midbrain and Striatum.

    PubMed

    Diederen, Kelly M J; Ziauddeen, Hisham; Vestergaard, Martin D; Spencer, Tom; Schultz, Wolfram; Fletcher, Paul C

    2017-02-15

    Learning to optimally predict rewards requires agents to account for fluctuations in reward value. Recent work suggests that individuals can efficiently learn about variable rewards through adaptation of the learning rate, and coding of prediction errors relative to reward variability. Such adaptive coding has been linked to midbrain dopamine neurons in nonhuman primates, and evidence in support of a similar role of the dopaminergic system in humans is emerging from fMRI data. Here, we sought to investigate the effect of dopaminergic perturbations on adaptive prediction error coding in humans, using a between-subject, placebo-controlled pharmacological fMRI study with a dopaminergic agonist (bromocriptine) and antagonist (sulpiride). Participants performed a previously validated task in which they predicted the magnitude of upcoming rewards drawn from distributions with varying SDs. After each prediction, participants received a reward, yielding trial-by-trial prediction errors. Under placebo, we replicated previous observations of adaptive coding in the midbrain and ventral striatum. Treatment with sulpiride attenuated adaptive coding in both midbrain and ventral striatum, and was associated with a decrease in performance, whereas bromocriptine did not have a significant impact. Although we observed no differential effect of SD on performance between the groups, computational modeling suggested decreased behavioral adaptation in the sulpiride group. These results suggest that normal dopaminergic function is critical for adaptive prediction error coding, a key property of the brain thought to facilitate efficient learning in variable environments. Crucially, these results also offer potential insights for understanding the impact of disrupted dopamine function in mental illness. SIGNIFICANCE STATEMENT To choose optimally, we have to learn what to expect. 
Humans dampen learning when there is a great deal of variability in reward outcome, and two brain regions that are modulated by the brain chemical dopamine are sensitive to reward variability. Here, we aimed to directly relate dopamine to learning about variable rewards, and the neural encoding of associated teaching signals. We perturbed dopamine in healthy individuals using dopaminergic medication and asked them to predict variable rewards while we made brain scans. Dopamine perturbations impaired learning and the neural encoding of reward variability, thus establishing a direct link between dopamine and adaptation to reward variability. These results aid our understanding of clinical conditions associated with dopaminergic dysfunction, such as psychosis. Copyright © 2017 Diederen et al.
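
    The core computational idea in this record, prediction errors coded relative to reward variability, can be sketched in a few lines. This is a minimal illustration with hypothetical numbers, not the authors' fitted computational model:

```python
import statistics

def adaptive_prediction_errors(rewards, predictions):
    """Scale raw prediction errors by the SD of the reward distribution,
    so the coded error is expressed in units of the current context's
    variability (a minimal sketch of adaptive coding)."""
    sd = statistics.stdev(rewards)
    return [(r - p) / sd for r, p in zip(rewards, predictions)]

# The same 1-unit shortfall is coded as a larger signal when reward
# variability is low than when it is high.
narrow = adaptive_prediction_errors([9, 10, 11, 10], [10, 10, 10, 10])
wide = adaptive_prediction_errors([9, 15, 5, 10], [10, 10, 10, 10])
print(abs(narrow[0]) > abs(wide[0]))  # True
```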

  6. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    PubMed

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. 
Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV by using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check CARMEN system showed a high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps performed from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.

  7. Multigrid preconditioned conjugate-gradient method for large-scale wave-front reconstruction.

    PubMed

    Gilles, Luc; Vogel, Curtis R; Ellerbroek, Brent L

    2002-09-01

    We introduce a multigrid preconditioned conjugate-gradient (MGCG) iterative scheme for computing open-loop wave-front reconstructors for extreme adaptive optics systems. We present numerical simulations for a 17-m class telescope with n = 48756 sensor measurement grid points within the aperture, which indicate that our MGCG method has a rapid convergence rate for a wide range of subaperture average slope measurement signal-to-noise ratios. The total computational cost is of order n log n. Hence our scheme provides for fast wave-front simulation and control in large-scale adaptive optics systems.
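
    The preconditioned conjugate-gradient iteration at the heart of this scheme can be sketched on a toy problem. Here a simple Jacobi (diagonal) preconditioner stands in for the paper's multigrid cycle, and a 1-D discrete Laplacian stands in for the wave-front reconstruction normal equations:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD systems A x = b.
    M_inv applies the preconditioner; the paper uses a multigrid cycle,
    while this sketch uses Jacobi to stay self-contained."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD system: a 1-D discrete Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))  # small residual
```

    Swapping the Jacobi step for one multigrid V-cycle per iteration is what gives the O(n log n) total cost quoted in the abstract.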

  8. An Adaptive Cross-Correlation Algorithm for Extended-Scene Shack-Hartmann Wavefront Sensing

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin; Green, Joseph J.; Ohara, Catherine M.; Redding, David C.

    2007-01-01

    This viewgraph presentation reviews the Adaptive Cross-Correlation (ACC) Algorithm for extended-scene Shack-Hartmann wavefront (WF) sensing. A Shack-Hartmann sensor places a lenslet array at a plane conjugate to the WF error source. Each sub-aperture lenslet samples the corresponding patch of the WF. A description of the ACC algorithm is included. The ACC algorithm has several benefits: it requires only about 4 image-shifting iterations to achieve 0.01-pixel accuracy, and it is insensitive to both background light and noise, making it much more robust than centroiding.
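
    The basic operation the ACC algorithm iterates on, estimating the shift between two sub-aperture images from their cross-correlation peak, can be sketched as follows. This sketch stops at whole-pixel accuracy, whereas ACC refines to roughly 0.01 pixel by repeatedly shifting one image and re-correlating:

```python
import numpy as np

def correlation_shift(a, b):
    """Estimate the (row, col) shift of image b relative to image a from
    the peak of their circular FFT cross-correlation."""
    c = np.fft.ifft2(np.fft.fft2(b) * np.conj(np.fft.fft2(a))).real
    peak = np.unravel_index(np.argmax(c), c.shape)
    # fold peaks in the upper half of each axis to negative shifts
    return tuple(p - n if p > n // 2 else p for p, n in zip(peak, c.shape))

rng = np.random.default_rng(1)
scene = rng.standard_normal((64, 64))          # extended-scene patch
shifted = np.roll(scene, (3, -4), axis=(0, 1))
print(correlation_shift(scene, shifted))       # (3, -4)
```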

  9. Development of adaptive liquid microlenses and microlens arrays

    NASA Astrophysics Data System (ADS)

    Berry, Shaun R.; Stewart, Jason B.; Thorsen, Todd A.; Guha, Ingrid

    2013-03-01

    We report on the development of sub-millimeter size adaptive liquid microlenses and microlens arrays using two immiscible liquids to form individual lenses. Microlenses and microlens arrays having aperture diameters as small as 50 microns were fabricated on a planar quartz substrate using patterned hydrophobic/hydrophilic regions. Liquid lenses were formed by a self-assembled oil dosing process that created well-defined lenses having a high fill factor. Variable focus was achieved by controlling the lens curvature through electrowetting. Greater than 70° of contact angle change was achieved with less than 20 volts, which results in a large optical power dynamic range.
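
    Electrowetting control of lens curvature follows the Young-Lippmann relation. The sketch below uses illustrative dielectric and interfacial parameters (they are assumptions, not values from this record) to show the contact angle falling as voltage rises:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle_deg(V, theta0_deg=160.0, eps_r=2.0, d=100e-9, gamma=0.01):
    """Young-Lippmann: cos(theta_V) = cos(theta_0) + eps_r*EPS0*V^2/(2*gamma*d).
    theta0_deg: zero-voltage contact angle; d: dielectric thickness (m);
    gamma: liquid-liquid interfacial tension (N/m). Values are illustrative."""
    cos_t = math.cos(math.radians(theta0_deg)) + eps_r * EPS0 * V**2 / (2 * gamma * d)
    return math.degrees(math.acos(min(cos_t, 1.0)))  # clamp at saturation

# Increasing voltage lowers the contact angle, steepening the liquid
# interface and so changing the lens's optical power.
for V in (0, 5, 10):
    print(V, round(contact_angle_deg(V), 1))
```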

  10. Corneal seal device

    NASA Technical Reports Server (NTRS)

    Baehr, E. F. (Inventor)

    1977-01-01

    A corneal seal device is provided which, when placed in an incision in the eye, permits the insertion of a surgical tool or instrument through the device into the eye. The device includes a seal chamber which opens into a tube which is adapted to be sutured to the eye and serves as an entry passage for a tool. A sealable aperture in the chamber permits passage of the tool through the chamber into the tube and hence into the eye. The chamber includes inlet ports adapted to be connected to a regulated source of irrigation fluid which provides a safe intraocular pressure.

  11. Pupil-segmentation-based adaptive optical correction of a high-numerical-aperture gradient refractive index lens for two-photon fluorescence endoscopy.

    PubMed

    Wang, Chen; Ji, Na

    2012-06-01

    The intrinsic aberrations of high-NA gradient refractive index (GRIN) lenses limit their image quality as well as field of view. Here we used a pupil-segmentation-based adaptive optical approach to correct the inherent aberrations in a two-photon fluorescence endoscope utilizing a 0.8 NA GRIN lens. By correcting the field-dependent aberrations, we recovered diffraction-limited performance across a large imaging field. The consequent improvements in imaging signal and resolution allowed us to detect fine structures that were otherwise invisible inside mouse brain slices.

  12. A Spanish version for the new ERA-EDTA coding system for primary renal disease.

    PubMed

    Zurriaga, Óscar; López-Briones, Carmen; Martín Escobar, Eduardo; Saracho-Rotaeche, Ramón; Moina Eguren, Íñigo; Pallardó Mateu, Luis; Abad Díez, José María; Sánchez Miret, José Ignacio

    2015-01-01

    The European Renal Association and the European Dialysis and Transplant Association (ERA-EDTA) have issued a new English-language coding system for primary kidney disease (PKD) aimed at solving the problems that were identified in the list of "Primary renal diagnoses" that has been in use for over 40 years. In the context of the Registro Español de Enfermos Renales (Spanish Registry of Renal Patients [REER]), the need for a translation and adaptation of terms, definitions and notes for the new ERA-EDTA codes was perceived in order to help those who have Spanish as their working language when using such codes. Bilingual nephrologists contributed a professional translation and were involved in a terminological adaptation process, which included a number of phases to contrast translation outputs. Codes, paragraphs, definitions and diagnostic criteria were reviewed, and the agreements and disagreements arising for each term were labelled. Finally, the version accepted by a majority of reviewers was adopted. A wide agreement was reached in the first review phase, with only 5 points of discrepancy remaining, which were agreed on in the final phase. Translation and adaptation into Spanish represent an improvement that will help to introduce and use the new coding system for PKD, as it can help reduce the time devoted to coding and shorten health workers' adaptation period to the new codes. Copyright © 2015 The Authors. Published by Elsevier España, S.L.U. All rights reserved.

  13. 5.625 Gbps bidirectional laser communications measurements between the NFIRE satellite and an optical ground station

    NASA Astrophysics Data System (ADS)

    Fields, Renny A.; Kozlowski, David A.; Yura, Harold T.; Wong, Robert L.; Wicker, Josef M.; Lunde, Carl T.; Gregory, Mark; Wandernoth, Bernhard K.; Heine, Frank F.; Luna, Joseph J.

    2011-11-01

    5.625 Gbps bidirectional laser communication at 1064 nm has been demonstrated on a repeatable basis between a Tesat coherent laser communication terminal with a 6.5 cm diameter ground aperture mounted inside the European Space Agency Optical Ground Station dome at Izana, Tenerife and a similar space-based terminal (12.4 cm diameter aperture) on the Near-Field InfraRed Experiment (NFIRE) low-earth-orbiting spacecraft. Both night and day bidirectional links were demonstrated with the longest being 177 seconds in duration. Correlation with atmospheric models and preliminary atmospheric r0 and scintillation measurements have been made for the conditions tested, suggesting that such coherent systems can be deployed successfully at still lower altitudes without resorting to the use of adaptive optics for compensation.

  14. Reducing the Requirements and Cost of Astronomical Telescopes

    NASA Technical Reports Server (NTRS)

    Smith, W. Scott; Whitaker, Ann F. (Technical Monitor)

    2002-01-01

    Limits on astronomical telescope apertures are being rapidly approached. These limits result from logistics, increasing complexity, and finally budgetary constraints. In an historical perspective, great strides have been made in the areas of aperture, adaptive optics, wavefront sensors, detectors, stellar interferometers and image reconstruction. What will be the next advances? Emerging data analysis techniques based on communication theory hold the promise of yielding more information from observational data through significant computer post-processing. This paper explores some of the current telescope limitations and ponders the possibilities of increasing the yield of scientific data through the migration of computer post-processing techniques to higher dimensions. Some of these processes hold the promise of reducing the requirements on the basic telescope hardware, making the next generation of instruments more affordable.

  15. Capacity achieving nonbinary LDPC coded non-uniform shaping modulation for adaptive optical communications.

    PubMed

    Lin, Changyu; Zou, Ding; Liu, Tao; Djordjevic, Ivan B

    2016-08-08

    A mutual information inspired nonbinary coded modulation design with non-uniform shaping is proposed. Instead of traditional power-of-two signal constellation sizes, we design 5-QAM, 7-QAM and 9-QAM constellations, which can be used in adaptive optical networks. The non-uniform shaping and LDPC code rate are jointly considered in the design, which results in a better-performing scheme at the same SNR values. The matched nonbinary (NB) LDPC code is used for this scheme, which further improves the coding gain and the overall performance. We analyze both coding performance and system SNR performance. We show that the proposed NB LDPC-coded 9-QAM has more than 2 dB gain in symbol SNR compared to traditional LDPC-coded star-8-QAM. On the other hand, the proposed NB LDPC-coded 5-QAM and 7-QAM have even better performance than LDPC-coded QPSK.
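
    The non-power-of-two constellations can be written down directly. Below, a 9-QAM (the 3x3 grid) with Maxwell-Boltzmann-style non-uniform probabilities illustrates how shaping trades a little source entropy for a lower average symbol power; this is an illustrative sketch, not the paper's mutual-information-optimized design:

```python
import numpy as np

# 9-QAM: the 3x3 grid {-1, 0, 1}^2 in the complex plane.
points = np.array([complex(i, q) for i in (-1, 0, 1) for q in (-1, 0, 1)])
energy = np.abs(points) ** 2

def shaping(nu):
    """Maxwell-Boltzmann probabilities p_k ~ exp(-nu * |x_k|^2);
    nu = 0 recovers uniform signalling."""
    p = np.exp(-nu * energy)
    return p / p.sum()

def entropy_bits(p):
    return float(-np.sum(p * np.log2(p)))

p_uniform, p_shaped = shaping(0.0), shaping(1.0)
print(entropy_bits(p_uniform), float(p_uniform @ energy))  # ~3.17 bits, power 4/3
print(entropy_bits(p_shaped), float(p_shaped @ energy))    # fewer bits, less power
```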

  16. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. 
However, too strong adaptation would result in loss of precise representation (e.g., when the avoidance of a loss in a loss-context is coded the same as receipt of a gain in a gain-context). We investigated an intermediate form of adaptation that is efficient while maintaining information about received gains and avoided losses. We found that frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Importantly, adaptation was intermediate, in line with influential models of reference dependence in behavioral economics. PMID:27683899

  17. Cross-Layer Design for Video Transmission over Wireless Rician Slow-Fading Channels Using an Adaptive Multiresolution Modulation and Coding Scheme

    NASA Astrophysics Data System (ADS)

    Pei, Yong; Modestino, James W.

    2007-12-01

    We describe a multilayered video transport scheme for wireless channels capable of adapting to channel conditions in order to maximize end-to-end quality of service (QoS). This scheme combines a scalable H.263+ video source coder with unequal error protection (UEP) across layers. The UEP is achieved by employing different channel codes together with a multiresolution modulation approach to transport the different priority layers. Adaptivity to channel conditions is provided through a joint source-channel coding (JSCC) approach which attempts to jointly optimize the source and channel coding rates together with the modulation parameters to obtain the maximum achievable end-to-end QoS for the prevailing channel conditions. In this work, we model the wireless links as slow-fading Rician channels, where the channel conditions can be described in terms of the channel signal-to-noise ratio (SNR) and the ratio of specular-to-diffuse energy. The multiresolution modulation/coding scheme consists of binary rate-compatible punctured convolutional (RCPC) codes used together with nonuniform phase-shift keyed (PSK) signaling constellations. Results indicate that this adaptive JSCC scheme employing scalable video encoding together with a multiresolution modulation/coding approach leads to significant improvements in delivered video quality for specified channel conditions. In particular, the approach results in considerably improved graceful degradation properties for decreasing channel SNR.

  18. Digitally Controlled Slot Coupled Patch Array

    NASA Technical Reports Server (NTRS)

    D'Arista, Thomas; Pauly, Jerry

    2010-01-01

    A four-element array conformed to a singly curved conducting surface has been demonstrated to provide a 2 dB axial ratio over a 14-percent bandwidth, while maintaining a VSWR (voltage standing wave ratio) of 2:1 and a gain of 13 dBiC. The array is digitally controlled and can be scanned with the LMS Adaptive Algorithm using the power spectrum as the objective, as well as the Direction of Arrival (DoA) of the beam to set the amplitude of the power spectrum. The total height of the array above the conducting surface is 1.5 inches (3.8 cm). A uniquely configured microstrip-coupled aperture over a conducting surface produced supergain characteristics, achieving 12.5 dBiC across the 2-to-2.13-GHz and 2.2-to-2.3-GHz frequency bands. This design is optimized to retain VSWR and axial ratio across the band as well. The four elements are uniquely configured with respect to one another for performance enhancement, and the appropriate phase excitation to each element for scan can be found either by analytical beam synthesis using the genetic algorithm with the measured or simulated far-field radiation pattern, or by an adaptive algorithm implemented with the digitized signal. The commercially available tuners and field-programmable gate array (FPGA) boards utilized required precise phase-coherent configuration control and, with custom code developed by Nokomis, Inc., were shown to be fully functional in a two-channel configuration controlled by FPGA boards. A four-channel tuner configuration and oscilloscope configuration were also demonstrated, although algorithm post-processing was required.

  19. Extended sources near-field processing of experimental aperture synthesis data and application of the Gerchberg method for enhancing radiometric three-dimensional millimetre-wave images in security screening portals

    NASA Astrophysics Data System (ADS)

    Salmon, Neil A.

    2017-10-01

    Aperture synthesis for passive millimetre wave imaging provides a means to screen people for concealed threats in the extreme near-field configuration of a portal, a regime where the imager to subject distance is of the order of both the required depth-of-field and the field-of-view. Due to optical aberrations, focal plane array imagers cannot deliver the large depth-of-fields and field-of-views required in this regime. Active sensors on the other hand can deliver these but face challenges of illumination, speckle and multi-path issues when imaging canyon regions of the body. Fortunately an aperture synthesis passive millimetre wave imaging system can deliver large depth-of-fields and field-of-views, whilst having no speckle effects, as the radiometric emission from the human body is spatially incoherent. Furthermore, as in portal security screening scenarios the aperture synthesis imaging technique delivers a half-wavelength spatial resolution, it can effectively screen the whole of the human body. Some recent measurements are presented that demonstrate the three-dimensional imaging capability of extended sources using a 22 GHz aperture synthesis system. A comparison is made between imagery generated via the analytic Fourier transform and a gridding fast Fourier transform method. The analytic Fourier transform enables aliasing in the imagery to be more clearly identified. Some initial results are also presented of how the Gerchberg technique, an image enhancement algorithm used in radio astronomy, is adapted for three-dimensional imaging in security screening. This technique is shown to be able to improve the quality of imagery, without adding extra receivers to the imager. The requirements of a walk through security screening system for use at entrances to airport departure lounges are discussed, concluding that these can be met by an aperture synthesis imager.
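
    The Gerchberg technique mentioned in this record alternates projections between two constraint sets: the measured samples in the signal domain and the known support in the Fourier domain. A 1-D sketch of the iteration follows (the security-screening application is a 3-D variant of the same idea):

```python
import numpy as np

def gerchberg(measured, known_mask, band_mask, n_iter=200):
    """Papoulis-Gerchberg extrapolation: repeatedly project onto the band
    limit in the Fourier domain, then restore the measured samples in the
    signal domain."""
    x = np.where(known_mask, measured, 0.0)
    for _ in range(n_iter):
        x = np.fft.ifft(np.fft.fft(x) * band_mask).real
        x[known_mask] = measured[known_mask]
    return x

# Band-limited test signal, measured only on the first 80 of 128 samples.
N = 128
n = np.arange(N)
true = np.cos(2 * np.pi * n / N) + 0.5 * np.cos(2 * np.pi * 3 * n / N)
known = np.zeros(N, dtype=bool)
known[:80] = True
band = np.zeros(N)
band[:6] = 1.0    # harmonics 0..5
band[-5:] = 1.0   # harmonics -5..-1
x = gerchberg(true, known, band)
err0 = np.linalg.norm(true[~known])           # error of the zero-filled start
err = np.linalg.norm((x - true)[~known])
print(err < err0)  # the unmeasured tail is (partially) recovered
```

    As the abstract notes, the gain comes from prior knowledge (the band limit) rather than from extra receivers.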

  20. Advanced technology development for image gathering, coding, and processing

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1990-01-01

    Three overlapping areas of research activities are presented: (1) Information theory and optimal filtering are extended to visual information acquisition and processing. The goal is to provide a comprehensive methodology for quantitatively assessing the end-to-end performance of image gathering, coding, and processing. (2) Focal-plane processing techniques and technology are developed to combine effectively image gathering with coding. The emphasis is on low-level vision processing akin to the retinal processing in human vision. (3) A breadboard adaptive image-coding system is being assembled. This system will be used to develop and evaluate a number of advanced image-coding technologies and techniques as well as research the concept of adaptive image coding.

  1. The multidimensional Self-Adaptive Grid code, SAGE, version 2

    NASA Technical Reports Server (NTRS)

    Davies, Carol B.; Venkatapathy, Ethiraj

    1995-01-01

    This new report on Version 2 of the SAGE code includes all the information in the original publication plus all upgrades and changes to the SAGE code since that time. The two most significant upgrades are the inclusion of a finite-volume option and the ability to adapt and manipulate zonal-matching multiple-grid files. In addition, the original SAGE code has been upgraded to Version 1.1 and includes all options mentioned in this report, with the exception of the multiple grid option and its associated features. Since Version 2 is a larger and more complex code, it is suggested (but not required) that Version 1.1 be used for single-grid applications. This document contains all the information required to run both versions of SAGE. The formulation of the adaption method is described in the first section of this document. The second section is presented in the form of a user guide that explains the input and execution of the code. The third section provides many examples. Successful application of the SAGE code in both two and three dimensions for the solution of various flow problems has proven the code to be robust, portable, and simple to use. Although the basic formulation follows the method of Nakahashi and Deiwert, many modifications have been made to facilitate the use of the self-adaptive grid method for complex grid structures. Modifications to the method and the simple but extensive input options make this a flexible and user-friendly code. The SAGE code can accommodate two-dimensional and three-dimensional, finite-difference and finite-volume, single grid, and zonal-matching multiple grid flow problems.
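
    The grid-adaption idea can be illustrated in one dimension with the classical equidistribution principle. This is a sketch of the general concept only; SAGE itself implements the Nakahashi-Deiwert spring-analogy method in two and three dimensions with many additional controls:

```python
import numpy as np

def equidistribute(x, u, n_new=None):
    """Redistribute grid points so each interval carries roughly equal
    arc-length-type weight w = sqrt(1 + (du/dx)^2), clustering points
    where the solution u varies rapidly."""
    n_new = n_new or len(x)
    w = np.sqrt(1.0 + np.gradient(u, x) ** 2)
    # cumulative weight (trapezoidal), normalized to [0, 1]
    W = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))))
    W /= W[-1]
    # invert: place new points at equal increments of cumulative weight
    return np.interp(np.linspace(0.0, 1.0, n_new), W, x)

# Points cluster around a steep tanh layer at x = 0 without changing
# the number of grid points.
x = np.linspace(-1.0, 1.0, 41)
u = np.tanh(20.0 * x)
x_new = equidistribute(x, u)
print(x_new[np.argmin(np.diff(x_new))])  # smallest spacing lies near x = 0
```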

  2. Assimilating Thor: How Airmen Integrate Weather Prediction

    DTIC Science & Technology

    2010-06-01

    atmosphere and the earth from the air and from space widened the aperture of data so as to overexpose humans to the panoply of information coming...endurance record flights circled the earth without stopping; aircraft climbed through the atmosphere into space. Weather surveillance radar...advances found congruence in the meteorological advance of ensemble weather modeling. Complex, adaptive systems like the atmosphere lend themselves to

  3. Applications of Adaptive Learning Controller to Synthetic Aperture Radar.

    DTIC Science & Technology

    1985-02-01

    [List-of-figures excerpt from the scanned report: FIGURE 37, "Location of Two Sub-Phase Histories to be Utilized in Estimating Misfocus Coefficients A and C"; FIGURES 23-27 and 38-94, ALC learning curves.]

  4. Evanescent Waves in High Numerical Aperture Aplanatic Solid Immersion Microscopy: Effects of Forbidden Light on Subsurface Imaging (Open Access, Publisher’s Version)

    DTIC Science & Technology

    2014-03-24

    of the aSIL microscopy for semiconductor failure analysis and is applicable to imaging in quantum optics [18], biophotonics [19] and metrology [20...is usually of interest, the model can be adapted to applications in fields such as quantum optics and biophotonics for which the non-resonant

  5. ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings

    NASA Astrophysics Data System (ADS)

    Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.

    2002-12-01

    We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.

  6. New perspective on single-radiator multiple-port antennas for adaptive beamforming applications

    PubMed Central

    Choo, Hosung

    2017-01-01

    One of the most challenging problems in recent antenna engineering fields is to achieve highly reliable beamforming capabilities in an extremely restricted space of small handheld devices. In this paper, we introduce a new perspective on the single-radiator multiple-port (SRMP) antenna to alter the traditional approach of multiple-antenna arrays for improving beamforming performance with reduced aperture sizes. The major contribution of this paper is to demonstrate the beamforming capability of the SRMP antenna for use as an extremely miniaturized front-end component in more sophisticated beamforming applications. To examine the beamforming capability, the radiation properties and the array factor of the SRMP antenna are theoretically formulated for electromagnetic characterization and are used as complex weights to form adaptive array patterns. Then, its fundamental performance limits are rigorously explored through enumerative studies by varying the dielectric constant of the substrate, and field tests are conducted using beamforming hardware to confirm the feasibility. The results demonstrate that the new perspective of the SRMP antenna allows for improved beamforming performance while maintaining consistently smaller aperture sizes compared to traditional multiple-antenna arrays. PMID:29023493
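
    The array-factor formulation this record refers to is standard; the sketch below uses the generic textbook form for a linear array with complex port weights, not the paper's SRMP-specific derivation:

```python
import numpy as np

def array_factor(weights, positions_wl, theta_deg):
    """AF(theta) = sum_n w_n * exp(j*2*pi*d_n*sin(theta)) for a linear
    array, with element positions d_n given in wavelengths."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    phase = 2j * np.pi * np.outer(np.sin(theta), positions_wl)
    return np.exp(phase) @ np.asarray(weights)

# Steer a 4-element, half-wavelength-spaced array to +30 degrees by
# conjugate phasing of the port weights.
d = np.array([0.0, 0.5, 1.0, 1.5])
w = np.exp(-2j * np.pi * d * np.sin(np.radians(30.0)))
angles = np.linspace(-90.0, 90.0, 361)
af = np.abs(array_factor(w, d, angles))
print(angles[np.argmax(af)])  # beam peaks at 30 degrees
```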

  7. The numerical simulation tool for the MAORY multiconjugate adaptive optics system

    NASA Astrophysics Data System (ADS)

    Arcidiacono, C.; Schreiber, L.; Bregoli, G.; Diolaiti, E.; Foppiani, I.; Agapito, G.; Puglisi, A.; Xompero, M.; Oberti, S.; Cosentino, G.; Lombini, M.; Butler, R. C.; Ciliegi, P.; Cortecchia, F.; Patti, M.; Esposito, S.; Feautrier, P.

    2016-07-01

The Multiconjugate Adaptive Optics RelaY (MAORY) is an Adaptive Optics module to be mounted on the ESO European Extremely Large Telescope (E-ELT). It is a hybrid natural and laser guide star system that will correct the atmospheric turbulence volume above the telescope, feeding the Multi-AO Imaging Camera for Deep Observations (MICADO) near-infrared spectro-imager. We developed an end-to-end Monte Carlo adaptive optics simulation tool to investigate the performance of MAORY and its calibration, acquisition, and operation strategies. MAORY will implement multiconjugate adaptive optics, combining Laser Guide Star (LGS) and Natural Guide Star (NGS) measurements. The simulation tool implements the various aspects of MAORY in an end-to-end fashion. The code has been developed in IDL and uses libraries in C++ and CUDA for efficiency. Here we recall the code architecture, describe the modeled instrument components, and present the control strategies implemented in the code.

  8. MAG3D and its application to internal flowfield analysis

    NASA Technical Reports Server (NTRS)

    Lee, K. D.; Henderson, T. L.; Choo, Y. K.

    1992-01-01

    MAG3D (multiblock adaptive grid, 3D) is a 3D solution-adaptive grid generation code which redistributes grid points to improve the accuracy of a flow solution without increasing the number of grid points. The code is applicable to structured grids with a multiblock topology. It is independent of the original grid generator and the flow solver. The code uses the coordinates of an initial grid and the flow solution interpolated onto the new grid. MAG3D uses a numerical mapping and potential theory to modify the grid distribution based on properties of the flow solution on the initial grid. The adaptation technique is discussed, and the capability of MAG3D is demonstrated with several internal flow examples. Advantages of using solution-adaptive grids are also shown by comparing flow solutions on adaptive grids with those on initial grids.
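A one-dimensional analogue conveys the redistribution idea: points are moved, without changing their number, so that a gradient-based weight is equidistributed over the grid. This is a generic illustration of solution-adaptive redistribution, not MAG3D's actual numerical mapping:

```python
# 1-D solution-adaptive redistribution: equidistribute the
# arc-length-like weight w = sqrt(1 + alpha*|du/dx|^2).
import math

def adapt_grid(x, u, alpha=1.0):
    n = len(x)
    # cumulative weighted "mass" along the initial grid
    m = [0.0]
    for i in range(1, n):
        dx = x[i] - x[i - 1]
        du = u[i] - u[i - 1]
        w = math.sqrt(1.0 + alpha * (du / dx) ** 2)
        m.append(m[-1] + w * dx)
    total = m[-1]
    # invert the mass function: place point j where mass = j/(n-1)*total
    new_x = [x[0]]
    for j in range(1, n - 1):
        target = total * j / (n - 1)
        i = next(k for k in range(1, n) if m[k] >= target)
        frac = (target - m[i - 1]) / (m[i] - m[i - 1])
        new_x.append(x[i - 1] + frac * (x[i] - x[i - 1]))
    new_x.append(x[-1])
    return new_x

xs = [i / 20 for i in range(21)]
us = [math.tanh(20 * (xi - 0.5)) for xi in xs]  # steep layer at x = 0.5
adapted = adapt_grid(xs, us, alpha=100.0)       # points cluster near 0.5
```

The point count and the domain endpoints are preserved, mirroring the abstract's claim that accuracy improves without adding grid points.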

  9. An approach enabling adaptive FEC for OFDM in fiber-VLLC system

    NASA Astrophysics Data System (ADS)

    Wei, Yiran; He, Jing; Deng, Rui; Shi, Jin; Chen, Shenghai; Chen, Lin

    2017-12-01

In this paper, we propose an orthogonal circulant matrix transform (OCT)-based adaptive frame-level forward error correction (FEC) scheme for a fiber-visible laser light communication (VLLC) system and experimentally demonstrate it using Reed-Solomon (RS) codes. In this method, no extra bits are spent on adaptation messages beyond the training sequence (TS), which is simultaneously used for synchronization and channel estimation. Therefore, RS coding can be performed adaptively frame by frame via the last received codeword-error-rate (CER) feedback, estimated from the TSs of the previous few OFDM frames. In addition, the experimental results show that, over 20 km of standard single-mode fiber (SSMF) and 8 m of visible light transmission, the costs of RS codewords are up to 14.12% lower than those of conventional adaptive subcarrier-RS-code based 16-QAM OFDM at a bit error rate (BER) of 10^-5.
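The feedback-driven adaptation can be sketched as a simple rate-selection rule. The RS(n, k) mode table and the CER thresholds below are illustrative assumptions, not the values used in the experiment:

```python
# Choose an RS(n, k) configuration frame by frame from the
# codeword-error-rate fed back via the training-sequence channel estimate.
RS_MODES = [        # (n, k): stronger codes have smaller rate k/n
    (255, 239),     # light protection,  rate ~0.937
    (255, 223),     # medium protection, rate ~0.875
    (255, 191),     # heavy protection,  rate ~0.749
]

def pick_rs_mode(last_cer, target_cer=1e-3):
    """Step to a stronger code when the fed-back CER exceeds the target;
    relax to a higher-rate code when the channel is clean."""
    if last_cer > 10 * target_cer:
        return RS_MODES[2]
    if last_cer > target_cer:
        return RS_MODES[1]
    return RS_MODES[0]
```

Because the decision uses only the receiver's CER estimate from already-transmitted TSs, no extra signalling bits are needed, which is the point the abstract emphasizes.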

  10. A study on multiresolution lossless video coding using inter/intra frame adaptive prediction

    NASA Astrophysics Data System (ADS)

    Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro

    2003-06-01

    Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    McMillan, Kyle; Marleau, Peter; Brubaker, Erik

In coded aperture imaging, one of the most important factors determining the quality of reconstructed images is the choice of mask/aperture pattern. In many applications, uniformly redundant arrays (URAs) are widely accepted as the optimal mask pattern. Under ideal conditions (thin, highly opaque masks), URA patterns are mathematically constructed to provide artifact-free reconstruction; however, the number of URAs for a chosen number of mask elements is limited, and when highly penetrating particles such as fast neutrons and high-energy gamma rays are being imaged, the optimum is seldom achieved. In this case, more robust mask patterns that provide better reconstructed image quality may exist. Through the use of heuristic optimization methods and maximum likelihood expectation maximization (MLEM) image reconstruction, we show that for both point and extended neutron sources, a random mask pattern can be optimized to provide better image quality than that of a URA.
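The MLEM update used in the reconstruction step has a compact form: forward-project the current estimate, compare with the measured counts, and back-project the ratio. The system matrix below is a toy stand-in for a real mask response, chosen only to show the iteration converging:

```python
# MLEM iteration: lam_j <- lam_j * sum_i A_ij * y_i / (A lam)_i / sens_j
def mlem(A, y, n_iter=200):
    n_det, n_src = len(A), len(A[0])
    lam = [1.0] * n_src                      # uniform initial estimate
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_src)]
    for _ in range(n_iter):
        fwd = [sum(A[i][j] * lam[j] for j in range(n_src)) for i in range(n_det)]
        back = [sum(A[i][j] * y[i] / fwd[i] for i in range(n_det)) for j in range(n_src)]
        lam = [lam[j] * back[j] / sens[j] for j in range(n_src)]
    return lam

# Two-pixel source seen through a toy 2-element "mask" response
A = [[0.9, 0.1],
     [0.2, 0.8]]
true_src = [4.0, 1.0]
y = [sum(A[i][j] * true_src[j] for j in range(2)) for i in range(2)]
est = mlem(A, y)   # converges toward the true source [4.0, 1.0]
```

In the mask-optimization setting of the abstract, the matrix `A` encodes a candidate mask pattern, and the quality of `est` for test sources is what the heuristic search scores.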

  12. A-7 Aloft Demonstration Flight Test Plan

    DTIC Science & Technology

    1975-09-01

    6095979 72A2130 Power Supply 12 VDC 6095681 72A29 NOC 72A30 ALOFT ASCU Adapter Set 72A3100 ALOFT ASCU Adapter L20-249-1 72A3110 Page assy L Bay and ASCU ...checks will also be performed for each of the following: 3.1.2.1.1 ASCU Codes. Verification will be made that all legal ASCU codes are recognized and...invalid codes inhibit attack mode. A check will also be made to verify that the ASCU codes for pilot-option weapons A-25 enable the retarded weapons

  13. DustPedia: Multiwavelength photometry and imagery of 875 nearby galaxies in 42 ultraviolet-microwave bands

    NASA Astrophysics Data System (ADS)

    Clark, C. J. R.; Verstocken, S.; Bianchi, S.; Fritz, J.; Viaene, S.; Smith, M. W. L.; Baes, M.; Casasola, V.; Cassara, L. P.; Davies, J. I.; De Looze, I.; De Vis, P.; Evans, R.; Galametz, M.; Jones, A. P.; Lianou, S.; Madden, S.; Mosenkov, A. V.; Xilouris, M.

    2018-01-01

    Aims: The DustPedia project is capitalising on the legacy of the Herschel Space Observatory, using cutting-edge modelling techniques to study dust in the 875 DustPedia galaxies - representing the vast majority of extended galaxies within 3000 km s-1 that were observed by Herschel. This work requires a database of multiwavelength imagery and photometry that greatly exceeds the scope (in terms of wavelength coverage and number of galaxies) of any previous local-Universe survey. Methods: We constructed a database containing our own custom Herschel reductions, along with standardised archival observations from GALEX, SDSS, DSS, 2MASS, WISE, Spitzer, and Planck. Using these data, we performed consistent aperture-matched photometry, which we combined with external supplementary photometry from IRAS and Planck. Results: We present our multiwavelength imagery and photometry across 42 UV-microwave bands for the 875 DustPedia galaxies. Our aperture-matched photometry, combined with the external supplementary photometry, represents a total of 21 857 photometric measurements. A typical DustPedia galaxy has multiwavelength photometry spanning 25 bands. We also present the Comprehensive & Adaptable Aperture Photometry Routine (CAAPR), the pipeline we developed to carry out our aperture-matched photometry. CAAPR is designed to produce consistent photometry for the enormous range of galaxy and observation types in our data. In particular, CAAPR is able to determine robust cross-compatible uncertainties, thanks to a novel method for reliably extrapolating the aperture noise for observations that cover a very limited amount of background. Our rich database of imagery and photometry is being made available to the community. Photometry data tables are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/609/A37

  14. Adaptive grid embedding for the two-dimensional flux-split Euler equations. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Warren, Gary Patrick

    1990-01-01

    A numerical algorithm is presented for solving the 2-D flux-split Euler equations using a multigrid method with adaptive grid embedding. The method uses an unstructured data set along with a system of pointers for communication on the irregularly shaped grid topologies. An explicit two-stage time advancement scheme is implemented. A multigrid algorithm is used to provide grid level communication and to accelerate the convergence of the solution to steady state. Results are presented for a subcritical airfoil and a transonic airfoil with 3 levels of adaptation. Comparisons are made with a structured upwind Euler code which uses the same flux integration techniques of the present algorithm. Good agreement is obtained with converged surface pressure coefficients. The lift coefficients of the adaptive code are within 2 1/2 percent of the structured code for the sub-critical case and within 4 1/2 percent of the structured code for the transonic case using approximately one-third the number of grid points.

  15. Improve load balancing and coding efficiency of tiles in high efficiency video coding by adaptive tile boundary

    NASA Astrophysics Data System (ADS)

    Chan, Chia-Hsin; Tu, Chun-Chuan; Tsai, Wen-Jiin

    2017-01-01

    High efficiency video coding (HEVC) not only improves the coding efficiency drastically compared to the well-known H.264/AVC but also introduces coding tools for parallel processing, one of which is tiles. Tile partitioning is allowed to be arbitrary in HEVC, but how to decide tile boundaries remains an open issue. An adaptive tile boundary (ATB) method is proposed to select a better tile partitioning to improve load balancing (ATB-LoadB) and coding efficiency (ATB-Gain) with a unified scheme. Experimental results show that, compared to ordinary uniform-space partitioning, the proposed ATB can save up to 17.65% of encoding times in parallel encoding scenarios and can reduce up to 0.8% of total bit rates for coding efficiency.
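The load-balancing half of the idea (ATB-LoadB) reduces, in sketch form, to choosing the tile boundary that equalizes predicted per-tile encoding cost. The per-column cost figures below are hypothetical, standing in for estimates such as the previous frame's per-CTU encoding times:

```python
# Pick the tile-column boundary that best balances the two tiles'
# total predicted encoding cost (two-tile case for simplicity).
def best_boundary(col_costs):
    total = sum(col_costs)
    best, best_imbalance = 1, float("inf")
    left = 0.0
    for b in range(1, len(col_costs)):      # boundary after column b-1
        left += col_costs[b - 1]
        imbalance = abs(left - (total - left))
        if imbalance < best_imbalance:
            best, best_imbalance = b, imbalance
    return best

# A frame whose right side is much cheaper to encode than its left:
costs = [9, 8, 7, 1, 1, 1, 1, 1]
b = best_boundary(costs)   # lands well left of the uniform midpoint (4)
```

Uniform-space partitioning would split at the midpoint regardless of content; adapting the boundary to the cost profile is what yields the parallel-encoding time savings reported above.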

  16. More About Vector Adaptive/Predictive Coding Of Speech

    NASA Technical Reports Server (NTRS)

    Jedrey, Thomas C.; Gersho, Allen

    1992-01-01

    Report presents additional information about digital speech-encoding and -decoding system described in "Vector Adaptive/Predictive Encoding of Speech" (NPO-17230). Summarizes development of vector adaptive/predictive coding (VAPC) system and describes basic functions of algorithm. Describes refinements introduced enabling receiver to cope with errors. VAPC algorithm implemented in integrated-circuit coding/decoding processors (codecs). VAPC and other codecs tested under variety of operating conditions. Tests designed to reveal effects of various background quiet and noisy environments and of poor telephone equipment. VAPC found competitive with and, in some respects, superior to other 4.8-kb/s codecs and other codecs of similar complexity.

  17. SimTrack: A compact c++ code for particle orbit and spin tracking in accelerators

    DOE PAGES

    Luo, Yun

    2015-08-29

SimTrack is a compact C++ code for 6-d symplectic element-by-element particle tracking in accelerators, originally designed for head-on beam–beam compensation simulation studies in the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. It provides 6-d symplectic orbit tracking with 4th-order symplectic integration for magnet elements and the 6-d symplectic synchro-beam map for beam–beam interaction. Since its inception in 2009, SimTrack has been used intensively for dynamic aperture calculations with beam–beam interaction for RHIC. Recently, proton spin tracking and electron energy loss due to synchrotron radiation were added. In this article, I present the code architecture, physics models, and some selected examples of its applications to RHIC and to eRHIC, a future electron-ion collider design.
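Fourth-order symplectic integration of the kind mentioned above is commonly built by the Yoshida composition of three leapfrog substeps. The sketch below applies it to a toy harmonic force (an assumption for illustration, not SimTrack's magnet maps) and checks that the energy drift stays bounded, the hallmark of a symplectic scheme:

```python
# Yoshida 4th-order composition: three 2nd-order leapfrog steps with
# weights (W1, W0, W1) cancel the 2nd-order error terms.
W1 = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))
W0 = 1.0 - 2.0 * W1

def leapfrog(x, p, dt, force):
    p += 0.5 * dt * force(x)   # half kick
    x += dt * p                # full drift
    p += 0.5 * dt * force(x)   # half kick
    return x, p

def yoshida4(x, p, dt, force):
    for w in (W1, W0, W1):
        x, p = leapfrog(x, p, w * dt, force)
    return x, p

# Harmonic oscillator: F = -x, energy E = (p^2 + x^2)/2 is conserved
force = lambda x: -x
x, p = 1.0, 0.0
e0 = 0.5 * (p * p + x * x)
for _ in range(1000):
    x, p = yoshida4(x, p, 0.05, force)
e1 = 0.5 * (p * p + x * x)   # energy error stays small and bounded
```

A non-symplectic scheme of the same order would show secular energy drift over long tracking runs, which is why dynamic-aperture studies insist on symplectic maps.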

  18. A European mobile satellite system concept exploiting CDMA and OBP

    NASA Technical Reports Server (NTRS)

    Vernucci, A.; Craig, A. D.

    1993-01-01

    This paper describes a novel Land Mobile Satellite System (LMSS) concept applicable to networks allowing access to a large number of gateway stations ('Hubs'), utilizing low-cost Very Small Aperture Terminals (VSAT's). Efficient operation of the Forward-Link (FL) repeater can be achieved by adopting a synchronous Code Division Multiple Access (CDMA) technique, whereby inter-code interference (self-noise) is virtually eliminated by synchronizing orthogonal codes. However, with a transparent FL repeater, the requirements imposed by the highly decentralized ground segment can lead to significant efficiency losses. The adoption of a FL On-Board Processing (OBP) repeater is proposed as a means of largely recovering this efficiency impairment. The paper describes the network architecture, the system design and performance, the OBP functions and impact on implementation. The proposed concept, applicable to a future generation of the European LMSS, was developed in the context of a European Space Agency (ESA) study contract.
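The self-noise elimination claimed for synchronous CDMA can be shown directly with Walsh-Hadamard codes, a standard orthogonal family used here for illustration (the paper does not specify its code set):

```python
# Time-aligned orthogonal spreading: Walsh-Hadamard codes correlate to
# zero against each other, so a synchronous despreader sees no
# inter-code interference (self-noise) from other users.
def hadamard(n):
    """Walsh-Hadamard codes of length n (n a power of two), entries +/-1."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

codes = hadamard(8)
bits = {0: +1, 3: -1}                       # two users on codes 0 and 3
chips = [sum(b * codes[u][i] for u, b in bits.items()) for i in range(8)]

# Despread user 3: correlate and normalise; orthogonality removes user 0
user3 = sum(chips[i] * codes[3][i] for i in range(8)) / 8
```

With any misalignment between codes this cancellation degrades, which is why the forward link must keep the orthogonal codes synchronized, the condition the OBP repeater is proposed to maintain.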

  19. A comparison between using incoherent or coherent sources to align and test an adaptive optical telescope

    NASA Technical Reports Server (NTRS)

    Anderson, Richard

    1994-01-01

The concept in the initial alignment of the segmented-mirror adaptive optics telescope called the phased array mirror extendable large aperture telescope (Pamela) is to produce an optical transfer function (OTF) which closely approximates the diffraction-limited value, corresponding to a system pupil function that is unity over the aperture and zero outside. There are differences in the theory of intensity measurements between coherent and incoherent radiation. As a result, some of the classical quantities which describe the performance of an optical system for incoherent radiation cannot be defined for a coherent field. The most important quantity describing the quality of an optical system is the OTF, and for a coherent source the OTF is not defined; instead, a coherent transfer function (CTF) is defined. The main conclusion of the paper is that an incoherent collimated source, and not a collimated laser source, is preferred to calibrate the Hartmann wavefront sensor (WFS) of an aligned adaptive optical system. A distant laser source can be used with minimal problems to correct the system for atmospheric turbulence. The collimation of the HeNe laser alignment source can be improved by using a very small pinhole in the spatial filter, so only the central portion of the beam is transmitted and the beam from the filter is nearly constant in amplitude. The size of this pinhole will be limited by the sensitivity of the lateral effect diode (LEDD) elements.
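The OTF/CTF distinction drawn above has a compact numerical illustration: for incoherent light the transfer function is the (normalised) autocorrelation of the pupil, while the coherent transfer function is the pupil itself, so the incoherent OTF extends to twice the coherent cutoff. The sketch below uses an ideal 1-D unit pupil:

```python
# Incoherent OTF as the normalised autocorrelation of a sampled
# 1-D pupil function (the CTF would simply be `pupil` itself).
def otf_1d(pupil):
    n = len(pupil)
    auto = []
    for s in range(-(n - 1), n):            # all lateral shifts
        acc = sum(pupil[i] * pupil[i + s]
                  for i in range(max(0, -s), min(n, n - s)))
        auto.append(acc)
    peak = max(auto)
    return [a / peak for a in auto]

pupil = [1.0] * 8                  # ideal unit pupil over the aperture
otf = otf_1d(pupil)
# Triangular OTF: unity at zero frequency, support 2x the pupil width
```

For Pamela, the alignment goal stated above corresponds to making the measured OTF approach this ideal autocorrelation of the unit pupil.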

  20. Adaptive Wavelet Coding Applied in a Wireless Control System.

    PubMed

    Gama, Felipe O S; Silveira, Luiz F Q; Salazar, Andrés O

    2017-12-13

Wireless control systems can sense, control and act on the information exchanged between the wireless sensor nodes in a control loop. However, the exchanged information becomes susceptible to the degenerative effects produced by the multipath propagation. In order to minimize the destructive effects characteristic of wireless channels, several techniques have been investigated recently. Among them, wavelet coding is a good alternative for wireless communications for its robustness to the effects of multipath and its low computational complexity. This work proposes an adaptive wavelet coding whose parameters of code rate and signal constellation can vary according to the fading level and evaluates the use of this transmission system in a control loop implemented by wireless sensor nodes. The performance of the adaptive system was evaluated in terms of bit error rate (BER) versus Eb/N0 and spectral efficiency, considering a time-varying channel with flat Rayleigh fading, and in terms of processing overhead on a control system with wireless communication. The results obtained through computational simulations and experimental tests show performance gains obtained by insertion of the adaptive wavelet coding in a control loop with nodes interconnected by wireless link. These results enable the use of this technique in a wireless link control loop.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russell E. Feder and Mahmoud Z. Youssef

Neutronics analyses to find nuclear heating rates and personnel dose rates were conducted in support of the integration of diagnostics into the ITER upper port plugs. Simplified shielding models of the visible-infrared diagnostic and of a large-aperture diagnostic were incorporated into the ITER global CAD model. Results for these systems are representative of typical designs with maximum shielding and a small aperture (Vis-IR) and minimal shielding with a large aperture. The neutronics discrete-ordinates codes ATTILA® and SEVERIAN® (the ATTILA parallel-processing version) were used. Material properties and the 500 MW D-T volume source were taken from the ITER "Brand Model" MCNP benchmark model. A biased quadrature set equivalent to Sn=32 and a scattering degree of Pn=3 were used, along with a 46-neutron and 21-gamma FENDL energy subgrouping. Total nuclear heating (neutron plus gamma heating) in the upper port plugs ranged between 380 and 350 kW for the Vis-IR and large-aperture cases. The large-aperture model exhibited lower total heating but much higher peak volumetric heating on the upper port plug structure. Personnel dose rates are calculated in a three-step process involving a neutron-only transport calculation, the generation of activation volume sources at predefined time steps, and gamma transport analyses for selected time steps. ANSI/ANS-6.1.1-1977 flux-to-dose conversion factors were used. Dose rates were evaluated for one full year of 500 MW D-T operation, comprising 3000 1800-second pulses. After one year the machine is shut down for maintenance, and personnel are permitted to access the diagnostic interspace after 2 weeks if dose rates are below 100 μSv/hr. Dose rates in the visible-IR diagnostic model after one day of shutdown were 130 μSv/hr but fell below the limit to 90 μSv/hr 2 weeks later. The large-aperture shielding model exhibited higher and more persistent dose rates: after 1 day the dose rate was 230 μSv/hr and was still 120 μSv/hr 4 weeks later.

  2. Mistranslation: from adaptations to applications.

    PubMed

    Hoffman, Kyle S; O'Donoghue, Patrick; Brandl, Christopher J

    2017-11-01

    The conservation of the genetic code indicates that there was a single origin, but like all genetic material, the cell's interpretation of the code is subject to evolutionary pressure. Single nucleotide variations in tRNA sequences can modulate codon assignments by altering codon-anticodon pairing or tRNA charging. Either can increase translation errors and even change the code. The frozen accident hypothesis argued that changes to the code would destabilize the proteome and reduce fitness. In studies of model organisms, mistranslation often acts as an adaptive response. These studies reveal evolutionary conserved mechanisms to maintain proteostasis even during high rates of mistranslation. This review discusses the evolutionary basis of altered genetic codes, how mistranslation is identified, and how deviations to the genetic code are exploited. We revisit early discoveries of genetic code deviations and provide examples of adaptive mistranslation events in nature. Lastly, we highlight innovations in synthetic biology to expand the genetic code. The genetic code is still evolving. Mistranslation increases proteomic diversity that enables cells to survive stress conditions or suppress a deleterious allele. Genetic code variants have been identified by genome and metagenome sequence analyses, suppressor genetics, and biochemical characterization. Understanding the mechanisms of translation and genetic code deviations enables the design of new codes to produce novel proteins. Engineering the translation machinery and expanding the genetic code to incorporate non-canonical amino acids are valuable tools in synthetic biology that are impacting biomedical research. This article is part of a Special Issue entitled "Biochemistry of Synthetic Biology - Recent Developments" Guest Editor: Dr. Ilka Heinemann and Dr. Patrick O'Donoghue. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abdel-Khalik, Hany S.; Turinsky, Paul J.

    2005-07-15

Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaptation, or core observables recorded at core conditions that differ from those at which the adaptation was completed. This paper also demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaptation for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitude of input data to one code is adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. The robustness of this application is also demonstrated.

  4. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  5. A biological inspired fuzzy adaptive window median filter (FAWMF) for enhancing DNA signal processing.

    PubMed

    Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin

    2017-10-01

Digital signal processing techniques commonly employ fixed-length window filters to process signal contents. DNA signals differ in character from common digital signals since they carry nucleotides as contents. The nucleotides have a genetic code context and fuzzy behavior owing to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence results in signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF) which computes the fuzzy membership strength of nucleotides in each slide of the window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions cause 3-base periodicity through an unbalanced nucleotide distribution, producing a relatively high bias in nucleotide usage, this fundamental characteristic of nucleotides has been exploited in FAWMF to suppress signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions contrary to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding region identification, i.e., 40% to 125%, as compared to other conventional window filters tested over more than 250 benchmarked and randomly taken DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms fixed-length filters in processing DNA signal contents. Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions contrary to fixed-window-length conventional filters. Copyright © 2017 Elsevier B.V. All rights reserved.
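The 3-base periodicity exploited by the filter is conventionally measured as the DFT power at k = N/3 of per-nucleotide indicator sequences. The sketch below uses toy sequences for illustration, not the benchmark datasets of the study:

```python
# Period-3 score: sum over A,C,G,T of |DFT at k = N/3|^2 of the
# binary indicator sequences of a DNA string.
import cmath

def period3_power(seq):
    n = len(seq)
    k = n / 3.0
    power = 0.0
    for base in "ACGT":
        x = [1.0 if s == base else 0.0 for s in seq]
        X = sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
        power += abs(X) ** 2
    return power

coding_like = "ATG" * 20                       # strong period-3 structure
random_like = "ATGCGTACAGTTACGCAATG" * 3       # no period-3 structure
strong = period3_power(coding_like)
weak = period3_power(random_like)              # much smaller than `strong`
```

A window filter such as FAWMF aims to sharpen exactly this contrast between coding-like and non-coding-like stretches while avoiding the spectral leakage of a fixed window.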

  6. Finding your way through EOL challenges in the ICU using Adaptive Leadership behaviours: A qualitative descriptive case study.

    PubMed

    Adams, Judith A; Bailey, Donald E; Anderson, Ruth A; Thygeson, Marcus

    2013-12-01

    Using the Adaptive Leadership framework, we describe behaviours that providers used while interacting with family members facing the challenges of recognising that their loved one was dying in the ICU. In this prospective pilot case study, we selected one ICU patient with end-stage illness who lacked decision-making capacity. Participants included four family members, one nurse and two physicians. The principle investigator observed and recorded three family conferences and conducted one in-depth interview with the family. Three members of the research team independently coded the transcripts using a priori codes to describe the Adaptive Leadership behaviours that providers used to facilitate the family's adaptive work, met to compare and discuss the codes and resolved all discrepancies. We identified behaviours used by nurses and physicians that facilitated the family's ability to adapt to the impending death of a loved one. Examples of these behaviours include defining the adaptive challenges for families and foreshadowing a poor prognosis. Nurse and physician Adaptive Leadership behaviours can facilitate the transition from curative to palliative care by helping family members do the adaptive work of letting go. Further research is warranted to create knowledge for providers to help family members adapt. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized variable-blocksized transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. Which coders are selected to code any given image region is made through a threshold driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.
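The threshold-driven selection among coders can be sketched with toy scalar quantizers standing in for the DCT coders of different rates; the step sizes, threshold, and sample blocks below are illustrative assumptions, not the paper's:

```python
# Mixture-style selection: try coders from cheapest (coarsest) to most
# expensive; keep the first whose distortion falls below the threshold.
def quantize(block, step):
    return [round(v / step) * step for v in block]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def mbc_select(block, threshold=1.0):
    for step in (8.0, 4.0, 1.0):           # decreasing step = higher rate
        coded = quantize(block, step)
        if mse(block, coded) < threshold:
            return step, coded
    return step, coded                     # fall back to the finest coder

smooth = [7.9, 8.1, 8.0, 8.0]
busy = [1.0, 7.0, 3.0, 13.0]
step_smooth, _ = mbc_select(smooth)   # coarse coder suffices
step_busy, _ = mbc_select(busy)       # falls through to the fine coder
```

Smooth regions thus spend few bits while busy regions get the higher-rate coder, which is how the mixture reaches low average rates without a global quality penalty.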

  8. Push Type Fastener

    NASA Technical Reports Server (NTRS)

    Jackson, Steven A. (Inventor)

    1996-01-01

    A push type fastener for fastening a movable structural part to a fixed structural part, wherein the coupling and decoupling actions are both a push type operation, the fastener consisting of a plunger having a shank with a plunger head at one end and a threaded end portion at the other end, an expandable grommet adapted to receive the plunger shank there through, and an attachable head which is securable to the threaded end of the plunger shank. The fastener requires each structural part to be provided with an aperture and the attachable head to be smaller than the aperture in the second structural part. The plunger is extensible through the grommet and is structurally configured with an external camming surface which is cooperatively engageable with internal surfaces of the grommet so that when the plunger is inserted in the grommet, the relative positioning of said cooperable camming surfaces determines the expansion of the grommet. Coupling of the parts is effected when the grommet is inserted in the aperture in the fixed structural part and expanded by pushing the plunger head and plunger at least a minimal distance through the grommet. Decoupling is effected by pushing the attachable head.

  9. Variation in waterlogging-triggered stomatal behavior contributes to changes in the cold acclimation process in prehardened Lolium perenne and Festuca pratensis.

    PubMed

    Jurczyk, Barbara; Pociecha, Ewa; Janowiak, Franciszek; Kabała, Dawid; Rapacz, Marcin

    2016-12-01

    According to predicted changes in climate, waterlogging events may occur more frequently in the future during autumn and winter at high latitudes of the Northern Hemisphere. If excess soil water coincides with the process of cold acclimation for plants, winter survival may potentially be affected. The effects of waterlogging during cold acclimation on stomatal aperture, relative water content, photochemical activity of photosystem II, freezing tolerance and plant regrowth after freezing were compared for two prehardened overwintering forage grasses, Lolium perenne and Festuca pratensis. The experiment was performed to test the hypothesis that changes in photochemical activity initiated by waterlogging-triggered modifications in the stomatal aperture contribute to changes in freezing tolerance. Principal component analysis showed that waterlogging activated different adaptive strategies in the two species studied. The increased freezing tolerance of F. pratensis was associated with increased photochemical activity connected with stomatal opening, whereas freezing tolerance of L. perenne was associated with a decrease in stomatal aperture. In conclusion, waterlogging-triggered stomatal behavior contributed to the efficiency of the cold acclimation process in L. perenne and F. pratensis. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  10. Technical Note: A fast online adaptive replanning method for VMAT using flattening filter free beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ates, Ozgur; Ahunbay, Ergun E.; Li, X. Allen, E-mail: ali@mcw.edu

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT research planning system, which enables the input and output of beam and machine parameters of VMAT plans. The SAM algorithm was used to modify multileaf collimator positions for each segment aperture based on the changes of the target from the planning (CT/MR) to daily image [CT/CBCT/magnetic resonance imaging (MRI)]. The leaf travel distance was controlled for large shifts to prevent the increase of VMAT delivery time. The SAM algorithm was tested for 11 patient cases including prostate, pancreatic, and lung cancers. For each daily image set, three types of VMAT plans, image-guided radiation therapy (IGRT) repositioning, SAM adaptive, and full-scope reoptimization plans, were generated and compared. Results: The SAM adaptive plans were found to have improved the plan quality in target and/or critical organs when compared to the IGRT repositioning plans and were comparable to the reoptimization plans based on the data of planning target volume (PTV)-V100 (volume covered by 100% of prescription dose). For the cases studied, the average PTV-V100 was 98.85% ± 1.13%, 97.61% ± 1.45%, and 92.84% ± 1.61% with FFF beams for the reoptimization, SAM adaptive, and repositioning plans, respectively. The execution of the SAM algorithm takes less than 10 s using 16-CPU (2.6 GHz dual core) hardware. Conclusions: The SAM algorithm can generate adaptive VMAT plans using FFF beams with plan qualities comparable to those from the full-scope reoptimization plans based on daily CT/CBCT/MRI and can be used for online replanning to address interfractional variations.
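
The core SAM step described above (shift each segment's leaf openings by the measured target displacement, while capping leaf travel so VMAT delivery time does not grow) can be pictured in a simplified 1D form. This is only an illustrative sketch with hypothetical names, not the authors' implementation:

```python
def morph_aperture(leaf_pairs, shift_mm, max_travel_mm=20.0):
    """Shift every (left, right) leaf pair by the measured target shift,
    clamping the travel so the delivery time does not increase."""
    d = max(-max_travel_mm, min(max_travel_mm, shift_mm))
    return [(left + d, right + d) for left, right in leaf_pairs]

aperture = [(-10.0, 12.0), (-8.0, 15.0)]
print(morph_aperture(aperture, 5.0))    # each opening shifted by 5 mm
print(morph_aperture(aperture, 35.0))   # large shift clamped to 20 mm of travel
```

A real SAM pass additionally projects the 3D target onto the beam's eye view and trades off target underdose against healthy-tissue overdose per leaf.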

  11. In vivo imaging of retinal pigment epithelium cells in age related macular degeneration

    PubMed Central

    Rossi, Ethan A.; Rangel-Fonseca, Piero; Parkins, Keith; Fischer, William; Latchney, Lisa R.; Folwell, Margaret A.; Williams, David R.; Dubra, Alfredo; Chung, Mina M.

    2013-01-01

    Morgan and colleagues demonstrated that the RPE cell mosaic can be resolved in the living human eye non-invasively by imaging the short-wavelength autofluorescence using an adaptive optics (AO) ophthalmoscope. This method, based on the assumption that all subjects have the same longitudinal chromatic aberration (LCA) correction, has proved difficult to use in diseased eyes, and in particular those affected by age-related macular degeneration (AMD). In this work, we improve Morgan’s method by accounting for chromatic aberration variations by optimizing the confocal aperture axial and transverse placement through an automated iterative maximization of image intensity. The increase in image intensity after algorithmic aperture placement varied depending upon patient and aperture position prior to optimization but increases as large as a factor of 10 were observed. When using a confocal aperture of 3.4 Airy disks in diameter, images were obtained using retinal radiant exposures of less than 2.44 J/cm2, which is ~22 times below the current ANSI maximum permissible exposure. RPE cell morphologies that were strikingly similar to those seen in postmortem histological studies were observed in AMD eyes, even in areas where the pattern of fluorescence appeared normal in commercial fundus autofluorescence (FAF) images. This new method can be used to study RPE morphology in AMD and other diseases, providing a powerful tool for understanding disease pathogenesis and progression, and offering a new means to assess the efficacy of treatments designed to restore RPE health. PMID:24298413
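
The automated aperture-placement step described above amounts to iteratively maximizing image intensity over the confocal pinhole's axial and transverse coordinates. The sketch below uses greedy coordinate ascent on a synthetic intensity function; the function and names are illustrative assumptions, not the paper's actual optimizer:

```python
def optimize_aperture(intensity, pos, step=0.5, iters=200):
    """Greedy coordinate ascent on image intensity over the aperture's
    (x, y, z) placement; the step is halved whenever no move helps."""
    pos = list(pos)
    best = intensity(*pos)
    for _ in range(iters):
        improved = False
        for axis in range(len(pos)):
            for delta in (step, -step):
                trial = list(pos)
                trial[axis] += delta
                val = intensity(*trial)
                if val > best:
                    best, pos, improved = val, trial, True
        if not improved:
            step *= 0.5          # refine the search grid
            if step < 1e-4:
                break
    return pos, best

# Synthetic peaked intensity with its optimum at (1.0, -2.0, 0.5):
f = lambda x, y, z: 10.0 - (x - 1.0)**2 - (y + 2.0)**2 - (z - 0.5)**2
pos, val = optimize_aperture(f, [0.0, 0.0, 0.0])
print([round(p, 3) for p in pos])   # -> [1.0, -2.0, 0.5]
```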

  12. Partially coherent X-ray wavefront propagation simulations including grazing-incidence focusing optics.

    PubMed

    Canestrari, Niccolo; Chubar, Oleg; Reininger, Ruben

    2014-09-01

    X-ray beamlines in modern synchrotron radiation sources make extensive use of grazing-incidence reflective optics, in particular Kirkpatrick-Baez elliptical mirror systems. These systems can focus the incoming X-rays down to nanometer-scale spot sizes while maintaining relatively large acceptance apertures and high flux in the focused radiation spots. In low-emittance storage rings and in free-electron lasers such systems are used with partially or even nearly fully coherent X-ray beams and often target diffraction-limited resolution. Therefore, their accurate simulation and modeling has to be performed within the framework of wave optics. Here the implementation and benchmarking of a wave-optics method for the simulation of grazing-incidence mirrors based on the local stationary-phase approximation or, in other words, the local propagation of the radiation electric field along geometrical rays, is described. The proposed method is CPU-efficient and fully compatible with the numerical methods of Fourier optics. It has been implemented in the Synchrotron Radiation Workshop (SRW) computer code and extensively tested against the geometrical ray-tracing code SHADOW. The test simulations have been performed for cases without and with diffraction at mirror apertures, including cases where the grazing-incidence mirrors can hardly be approximated by ideal lenses. Good agreement between the SRW and SHADOW simulation results is observed in the cases without diffraction. The differences between the simulation results obtained by the two codes in diffraction-dominated cases for illumination with fully or partially coherent radiation are analyzed and interpreted. The application of the new method for the simulation of wavefront propagation through a high-resolution X-ray microspectroscopy beamline at the National Synchrotron Light Source II (Brookhaven National Laboratory, USA) is demonstrated.

  13. Polarization-dependent atomic dipole traps behind a circular aperture for neutral-atom quantum computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gillen-Christandl, Katharina; Copsey, Bert D.

    2011-02-15

    The neutral-atom quantum computing community has successfully implemented almost all necessary steps for constructing a neutral-atom quantum computer. We present computational results of a study aimed at solving the remaining problem of creating a quantum memory with individually addressable sites for quantum computing. The basis of this quantum memory is the diffraction pattern formed by laser light incident on a circular aperture. Very close to the aperture, the diffraction pattern has localized bright and dark spots that can serve as red-detuned or blue-detuned atomic dipole traps. These traps are suitable for quantum computing even for moderate laser powers. In particular, for moderate laser intensities (~100 W/cm²) and comparatively small detunings (~1000-10 000 linewidths), trap depths of ~1 mK and trap frequencies of several to tens of kilohertz are achieved. Our results indicate that these dipole traps can be moved by tilting the incident laser beams without significantly changing the trap properties. We also explored the polarization dependence of these dipole traps. We developed a code that calculates the trapping potential energy for any magnetic substate of any hyperfine ground state of any alkali-metal atom for any laser detuning much smaller than the fine-structure splitting for any given electric field distribution. We describe details of our calculations and include a summary of different notations and conventions for the reduced matrix element and how to convert it to SI units. We applied this code to these traps and found a method for bringing two traps together and apart controllably without expelling the atoms from the trap and without significant tunneling probability between the traps. This approach can be scaled up to a two-dimensional array of many pinholes, forming a quantum memory with single-site addressability, in which pairs of atoms can be brought together and apart for two-qubit gates for quantum computing.
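
As a sanity check on the quoted numbers, the standard far-detuned two-level dipole potential, U = (3πc²/2ω₀³)(Γ/Δ)I, reproduces millikelvin-scale depths at ~100 W/cm² and ~1000-linewidth detunings. The atomic parameters below (Rb D2 line) are illustrative assumptions, not values from the paper:

```python
import math

c   = 2.998e8                  # speed of light, m/s
kB  = 1.381e-23                # Boltzmann constant, J/K
lam = 780e-9                   # Rb D2 wavelength, m (assumed)
Gamma = 2 * math.pi * 6.07e6   # Rb D2 natural linewidth, rad/s (assumed)

omega0 = 2 * math.pi * c / lam
I = 100 * 1e4                  # 100 W/cm^2 converted to W/m^2
Delta = 1000 * Gamma           # detuning of 1000 linewidths

# Far-detuned two-level dipole potential U = (3*pi*c^2 / (2*omega0^3)) * (Gamma/Delta) * I
U = (3 * math.pi * c**2 / (2 * omega0**3)) * (Gamma / Delta) * I
print(f"trap depth ~ {U / kB * 1e3:.1f} mK")   # of order 1 mK, as quoted
```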

  14. The HIBEAM Manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William M.

    2000-02-01

    HIBEAM is a 2 1/2D particle-in-cell (PIC) simulation code developed in the late 1990s in the Heavy-Ion Fusion research program at Lawrence Berkeley National Laboratory. The major purpose of HIBEAM is to simulate the transverse (i.e., X-Y) dynamics of a space-charge-dominated, non-relativistic heavy-ion beam being transported in a static accelerator focusing lattice. HIBEAM has been used to study beam combining systems, effective dynamic apertures in electrostatic quadrupole lattices, and emittance growth due to transverse misalignments. At present, HIBEAM runs on the CRAY vector machines (C90 and J90's) at NERSC, although it would be relatively simple to port the code to UNIX workstations so long as IMSL math routines were available.

  15. Space Science

    NASA Image and Video Library

    1995-06-08

    Scientists at Marshall's Adaptive Optics Lab demonstrate the Wave Front Sensor alignment using the Phased Array Mirror Extendible Large Aperture (PAMELA) optics adjustment. The primary objective of the PAMELA project is to develop methods for aligning and controlling adaptive optics segmented mirror systems. These systems can be used to acquire or project light energy. The Next Generation Space Telescope is an example of an energy acquisition system that will employ segmented mirrors. Light projection systems can also be used for power beaming and orbital debris removal. All segmented optical systems must be adjusted to provide maximum performance. PAMELA is an ongoing project that NASA is utilizing to investigate various methods for maximizing system performance.

  16. Optomechanical design and analysis of a self-adaptive mounting method for optimizing phase matching of large potassium dihydrogen phosphate converter

    NASA Astrophysics Data System (ADS)

    Zhang, Zheng; Tian, Menjiya; Quan, Xusong; Pei, Guoqing; Wang, Hui; Liu, Tianye; Long, Kai; Xiong, Zhao; Rong, Yiming

    2017-11-01

    Surface control and phase matching of large laser conversion optics are urgent requirements and huge challenges in high-power solid-state laser facilities. A self-adaptive, nanocompensating mounting configuration for a large-aperture potassium dihydrogen phosphate (KDP) frequency doubler is proposed based on a lever-type surface correction mechanism. A combined mechanical, numerical, and optical model is developed and employed to evaluate the comprehensive performance of this mounting method. The results validate the method's advantages in surface adjustment and phase-matching improvement. In addition, the optimal value of the modulation force is determined through a series of simulations and calculations.

  17. Dual output variable pitch turbofan actuation system

    NASA Technical Reports Server (NTRS)

    Griswold, R. H., Jr.; Broman, C. L. (Inventor)

    1976-01-01

    An improved actuating mechanism was provided for a gas turbine engine incorporating fan blades of the variable pitch variety, the actuator adapted to rotate the individual fan blades within apertures in an associated fan disc. The actuator included means such as a pair of synchronizing ring gears, one on each side of the blade shanks, and adapted to engage pinions disposed thereon. Means were provided to impart rotation to the ring gears in opposite directions to effect rotation of the blade shanks in response to a predetermined input signal. In the event of system failure, a run-away actuator was prevented by an improved braking device which arrests the mechanism.

  18. Application of Adaptive Beamforming to Signal Observations at the Mt. Meron Array, Israel

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harris, D. B.

    2010-06-07

    The Mt. Meron array consists of 16 stations spanning an aperture of 3-4 kilometers in northern Israel. The array is situated in a region of substantial topographic relief, and is surrounded by settlements at close range (Figure 1). Consequently the level of noise at the array is high, which requires efforts at mitigation if distant regional events of moderate magnitude are to be observed. This note describes an initial application of two classic adaptive beamforming algorithms to data from the array to observe P waves from 5 events east of the array ranging in distance from 1100-2150 kilometers.
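
The note does not name the two classic algorithms, but a standard example of adaptive beamforming is the MVDR (Capon) beamformer, whose weights w = R⁻¹a / (aᴴR⁻¹a) minimize output noise power subject to unit gain toward the steered direction. A minimal illustration on synthetic noise (all names and data here are hypothetical):

```python
import numpy as np

def mvdr_weights(R, steering):
    """MVDR/Capon weights: minimize w^H R w subject to w^H a = 1."""
    Rinv_a = np.linalg.solve(R, steering)
    return Rinv_a / (steering.conj() @ Rinv_a)

rng = np.random.default_rng(0)
n_sensors = 16
a = np.exp(1j * np.pi * np.arange(n_sensors) * 0.3)   # steering vector toward the source
noise = rng.standard_normal((n_sensors, 200)) + 1j * rng.standard_normal((n_sensors, 200))
R = noise @ noise.conj().T / 200 + np.eye(n_sensors)  # sample covariance, diagonally loaded

w = mvdr_weights(R, a)
print(abs(w.conj() @ a))   # ~1: distortionless response toward the source
```

The distortionless constraint (unit gain toward the source) is what lets the adapted weights suppress the local noise field without attenuating the distant P-wave signal.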

  19. A Numerical Study on Small-Scale Permeability Creation Associated with Fluid Pressure Induced Inelastic Shearing

    NASA Astrophysics Data System (ADS)

    Vogler, D.; Amann, F.; Bayer, P.

    2014-12-01

    Anthropogenic perturbations in a rock mass at great depth cause a complex thermal-hydro-mechanical (THM) response. This is of particular relevance when dealing with enhanced geothermal systems (EGS) and unconventional oil and gas recovery utilizing hydraulic fracturing. Studying the key THM coupled processes associated with specific reservoir characteristics in an EGS is of foremost relevance to establishing a heat exchanger able to achieve the target production rate. Many reservoirs have naturally low permeability, and the target permeability can only be achieved through the creation of new fractures or inelastic and dilatant shearing of pre-existing discontinuities. The latter process, which is considered to irreversibly increase the apertures of pre-existing discontinuities, has been shown to be especially important for EGS. Common constitutive equations linking the change in hydraulic aperture and the change in mechanical aperture are based on the basic formulation of the cubic law, which linearly relates the flow rate in a fracture to the pressure gradient. However, HM-coupled laboratory investigations demonstrate that the relation between the mechanical and the hydraulic aperture assumed in the cubic law is not valid when dealing with very small initial apertures, which are likely to occur at great depth. In the current study, we investigate the relevance of this discrepancy for the early stage of permeability creation in an EGS, where massive fluid injections trigger largely irreversible inelastic shearing of critically stressed discontinuities. Understanding small-scale effects in fractures in EGS during fluid injection is crucial to predicting reservoir fluid production rates and seismic events. Our study aims to implement an empirical constitutive law in an existing discrete fracture code and calibrate it against experimental data showing the irreversible shearing-induced permeability changes. This empirical relation will later be used to quantify the relevance of uncertainties in reservoir characterisation such as discrete fracture networks (DFN) and the in-situ state of stress.
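
For reference, the cubic law mentioned above states that flow through a parallel-plate fracture scales with the cube of the hydraulic aperture: Q = (w·b³/12μ)·dp/dx for fracture width w, aperture b, and viscosity μ. A one-function illustration (the numbers are arbitrary):

```python
def cubic_law_flow(aperture_m, width_m, dpdx_pa_per_m, mu_pa_s=1e-3):
    """Volumetric flow rate Q through a parallel-plate fracture (cubic law)."""
    return width_m * aperture_m**3 / (12.0 * mu_pa_s) * dpdx_pa_per_m

q1 = cubic_law_flow(100e-6, 1.0, 1e4)   # 100 micron aperture
q2 = cubic_law_flow(200e-6, 1.0, 1e4)   # doubling the aperture
print(q2 / q1)                          # flow scales with aperture cubed (ratio ~8)
```

The cubic sensitivity to b is why the small-aperture deviations from this law, noted in the abstract, matter so much for predicted production rates.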

  20. MANUAL FOR OPERATIONAL DOCUMENTARY PHOTOGRAPHY (ODP)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendrix, V.V.

    1963-12-23

    The SL-1 incident showed the need for pre-incident photographs of the facility to aid in rescue or recovery operations. A system of documentary photographic coverage was developed to fill this need for all the NRTS reactors and facilities. In this system, aperture cards with photographic negatives are used, and the cards are coded with respect to facility, room, floor, angle, and other variables. Operational planning for documentary photographs and updating of the cards are discussed. (D.L.C.)

  1. Hard x ray imaging graphics development and literature search

    NASA Technical Reports Server (NTRS)

    Emslie, A. Gordon

    1991-01-01

    This report presents work performed between June 1990 and June 1991 and has the following objectives: (1) a comprehensive literature search of imaging technology and coded aperture imaging as well as relevant topics relating to solar flares; (2) an analysis of random number generators; and (3) programming simulation models of hard x ray telescopes. All programs are compatible with the NASA/MSFC Space Science Laboratory VAX Cluster and are written in VAX FORTRAN and VAX IDL (Interactive Data Language).

  2. Swift/BAT Calibration and Spectral Response

    NASA Technical Reports Server (NTRS)

    Parsons, A.

    2004-01-01

    The Burst Alert Telescope (BAT) aboard NASA's Swift Gamma-Ray Burst Explorer is a large coded aperture gamma-ray telescope consisting of a 2.4 m (8 ft) x 1.2 m (4 ft) coded aperture mask supported 1 meter above a 5200 square cm area detector plane containing 32,768 individual 4 mm x 4 mm x 2 mm CZT detectors. The BAT is now completely assembled and integrated with the Swift spacecraft in anticipation of an October 2004 launch. Extensive ground calibration measurements using a variety of radioactive sources have resulted in a moderately high fidelity model for the BAT spectral and photometric response. This paper describes these ground calibration measurements as well as related computer simulations used to study the efficiency and individual detector properties of the BAT detector array. The creation of a single spectral response model representative of the fully integrated BAT posed an interesting challenge and is at the heart of the public analysis tool "batdrmgen", which computes a response matrix for any given sky position within the BAT FOV. This paper will describe the batdrmgen response generator tool and conclude with a description of the on-orbit calibration plans as well as plans for the future improvements needed to produce the more detailed spectral response model that is required for the construction of an all-sky hard x-ray survey.

  3. Scalable gamma-ray camera for wide-area search based on silicon photomultipliers array

    NASA Astrophysics Data System (ADS)

    Jeong, Manhee; Van, Benjamin; Wells, Byron T.; D'Aries, Lawrence J.; Hammig, Mark D.

    2018-03-01

    Portable coded-aperture imaging systems based on scintillators and semiconductors have found use in a variety of radiological applications. For stand-off detection of weakly emitting materials, large volume detectors can facilitate the rapid localization of emitting materials. We describe a scalable coded-aperture imaging system based on 5.02 × 5.02 cm² CsI(Tl) scintillator modules, each partitioned into 4 × 4 × 20 mm³ pixels that are optically coupled to 12 × 12 pixel silicon photo-multiplier (SiPM) arrays. The 144 pixels per module are read out with a resistor-based charge-division circuit that reduces the readout outputs from 144 to four signals per module, from which the interaction position and total deposited energy can be extracted. All 144 CsI(Tl) pixels are readily distinguishable with an average energy resolution, at 662 keV, of 13.7% FWHM, a peak-to-valley ratio of 8.2, and a peak-to-Compton ratio of 2.9. The detector module is composed of a SiPM array coupled with a 2 cm thick scintillator and a modified uniformly redundant array mask. For the image reconstruction, cross-correlation and maximum-likelihood expectation maximization methods are used. The system shows a field of view of 45° and an angular resolution of 4.7° FWHM.
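
The cross-correlation reconstruction mentioned above is easy to demonstrate in 1D with a quadratic-residue (Legendre) mask, a 1D analogue of the 2D modified uniformly redundant array the system uses. This toy sketch is not the authors' pipeline, only the decoding principle:

```python
import numpy as np

p = 11
qr = {(i * i) % p for i in range(1, p)}                # quadratic residues mod 11
A = np.array([1 if i in qr else 0 for i in range(p)])  # aperture: 1 = open
G = 2 * A - 1                                          # balanced decoding array

def cyclic_conv(x, h):
    """Circular convolution: the detector records the scene blurred by the mask."""
    n = len(x)
    return np.array([sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)])

scene = np.zeros(p); scene[4] = 1.0                    # point source at pixel 4
recorded = cyclic_conv(scene, A)                       # shifted mask shadow on detector
decoded = np.array([recorded @ np.roll(G, s) for s in range(p)])  # correlate with G
print(int(decoded.argmax()))                           # -> 4: source position recovered
```

Because the mask's cyclic autocorrelation against G is two-valued (a peak plus a flat sidelobe), the correlation step recovers the source as a clean peak; real systems then refine with maximum-likelihood expectation maximization.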

  4. Engineering design of the Regolith X-ray Imaging Spectrometer (REXIS) instrument: an OSIRIS-REx student collaboration

    NASA Astrophysics Data System (ADS)

    Jones, Michael; Chodas, Mark; Smith, Matthew J.; Masterson, Rebecca A.

    2014-07-01

    OSIRIS-REx is a NASA New Frontiers mission scheduled for launch in 2016 that will travel to the asteroid Bennu and return a pristine sample of the asteroid to Earth. The REgolith X-ray Imaging Spectrometer (REXIS) is a student collaboration instrument on-board the OSIRIS-REx spacecraft. REXIS is a NASA risk Class D instrument, and its design and development is largely student led. The engineering team consists of MIT graduate and undergraduate students and staff at the MIT Space Systems Laboratory. The primary goal of REXIS is the education of science and engineering students through participation in the development of flight hardware. In flight, REXIS will contribute to the mission by providing an elemental abundance map of the asteroid and by characterizing Bennu among the known meteorite groups. REXIS is sensitive to X-rays between 0.5 and 7 keV, and uses coded aperture imaging to map the distribution of iron with 50 m spatial resolution. This paper describes the science goals, concept of operations, and overall engineering design of the REXIS instrument. Each subsystem of the instrument is addressed with a high-level description of the design. Critical design elements such as the Thermal Isolation Layer (TIL), radiation cover, coded-aperture mask, and Detector Assembly Mount (DAM) are discussed in further detail.

  5. Nyx: Adaptive mesh, massively-parallel, cosmological simulation code

    NASA Astrophysics Data System (ADS)

    Almgren, Ann; Beckner, Vince; Friesen, Brian; Lukic, Zarija; Zhang, Weiqun

    2017-12-01

    The Nyx code solves the equations of compressible hydrodynamics on an adaptive grid hierarchy, coupled with an N-body treatment of dark matter. The gas dynamics in Nyx use a finite-volume methodology on an adaptive set of 3-D Eulerian grids; dark matter is represented as discrete particles moving under the influence of gravity. Particles are evolved via a particle-mesh method, using the Cloud-in-Cell deposition/interpolation scheme. Both baryonic and dark matter contribute to the gravitational field. In addition, Nyx includes physics for accurately modeling the intergalactic medium; in the optically thin limit and assuming ionization equilibrium, the code calculates the heating and cooling processes of the primordial-composition gas in an ionizing ultraviolet background radiation field.
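
The Cloud-in-Cell deposition named above assigns each particle's mass to its two nearest cell centers with linear weights (in 1D; Nyx does this in 3D). A minimal sketch on a periodic grid, with hypothetical helper names:

```python
import numpy as np

def cic_deposit(positions, masses, n_cells, box=1.0):
    """1D Cloud-in-Cell: split each particle's mass linearly between
    the two nearest cell centers of a periodic grid; returns density."""
    rho = np.zeros(n_cells)
    dx = box / n_cells
    for x, m in zip(positions, masses):
        s = x / dx - 0.5                    # position in cell-centered coordinates
        i = int(np.floor(s))
        frac = s - i                        # linear weight for the right neighbor
        rho[i % n_cells] += m * (1 - frac) / dx
        rho[(i + 1) % n_cells] += m * frac / dx
    return rho

rho = cic_deposit([0.31], [1.0], n_cells=10)
print(rho.sum() * 0.1)   # total mass is conserved (~1.0)
```

The same linear weights are reused to interpolate the mesh gravitational force back onto the particles, which keeps the scheme momentum-conserving.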

  6. Adaptation and perceptual norms

    NASA Astrophysics Data System (ADS)

    Webster, Michael A.; Yasuda, Maiko; Haber, Sara; Leonard, Deanne; Ballardini, Nicole

    2007-02-01

    We used adaptation to examine the relationship between perceptual norms--the stimuli observers describe as psychologically neutral, and response norms--the stimulus levels that leave visual sensitivity in a neutral or balanced state. Adapting to stimuli on opposite sides of a neutral point (e.g. redder or greener than white) biases appearance in opposite ways. Thus the adapting stimulus can be titrated to find the unique adapting level that does not bias appearance. We compared these response norms to subjectively defined neutral points both within the same observer (at different retinal eccentricities) and between observers. These comparisons were made for visual judgments of color, image focus, and human faces, stimuli that are very different and may depend on very different levels of processing, yet which share the property that for each there is a well defined and perceptually salient norm. In each case the adaptation aftereffects were consistent with an underlying sensitivity basis for the perceptual norm. Specifically, response norms were similar to and thus covaried with the perceptual norm, and under common adaptation differences between subjectively defined norms were reduced. These results are consistent with models of norm-based codes and suggest that these codes underlie an important link between visual coding and visual experience.

  7. QOS-aware error recovery in wireless body sensor networks using adaptive network coding.

    PubMed

    Razzaque, Mohammad Abdur; Javadi, Saeideh S; Coulibaly, Yahaya; Hira, Muta Tah

    2014-12-29

    Wireless body sensor networks (WBSNs) for healthcare and medical applications are real-time and life-critical infrastructures, which require a strict guarantee of quality of service (QoS), in terms of latency, error rate and reliability. Considering the criticality of healthcare and medical applications, WBSNs need to fulfill users/applications and the corresponding network's QoS requirements. For instance, for a real-time application to support on-time data delivery, a WBSN needs to guarantee a constrained delay at the network level. A network coding-based error recovery mechanism is an emerging mechanism that can be used in these systems to support QoS at very low energy, memory and hardware cost. However, in dynamic network environments and user requirements, the original non-adaptive version of network coding fails to support some of the network and user QoS requirements. This work explores the QoS requirements of WBSNs in both perspectives of QoS. Based on these requirements, this paper proposes an adaptive network coding-based, QoS-aware error recovery mechanism for WBSNs. It utilizes network-level and user-/application-level information to make it adaptive in both contexts. Thus, it provides improved QoS support adaptively in terms of reliability, energy efficiency and delay. Simulation results show the potential of the proposed mechanism in terms of adaptability, reliability, real-time data delivery and network lifetime compared to its counterparts.
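
The error-recovery idea behind such schemes is easy to see in its simplest form: one XOR parity packet per generation lets the receiver repair any single lost packet, and an "adaptive" sender raises or lowers the redundancy with the observed loss rate. The names and policy below are illustrative, not the paper's protocol:

```python
from functools import reduce

def xor_parity(packets):
    """Bytewise XOR of equal-length packets: the coded repair packet."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*packets))

def recover(received, parity):
    """Rebuild the one missing packet from the survivors and the parity."""
    return xor_parity(received + [parity])

gen = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # one coding generation
parity = xor_parity(gen)
lost = gen[1]                                    # suppose packet 1 is dropped
repaired = recover([gen[0], gen[2]], parity)
print(repaired == lost)   # -> True
```

Recovering multiple losses per generation requires more parity packets (e.g. random linear combinations over a finite field), which is where the energy/QoS trade-off the paper adapts comes in.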

  8. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  9. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  10. 43 CFR 11.64 - Injury determination phase-testing and sampling methods.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    .... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...

  11. Low-Gain Circularly Polarized Antenna with Torus-Shaped Pattern

    NASA Technical Reports Server (NTRS)

    Amaro, Luis R.; Kruid, Ronald C.; Vacchione, Joseph D.; Prata, Aluizio

    2012-01-01

    The Juno mission to Jupiter requires an antenna with a torus-shaped antenna pattern with approximately 6 dBic gain and circular polarization over the Deep Space Network (DSN) 7-GHz transmit frequency and the 8-GHz receive frequency. Given the large distances that accumulate en route to Jupiter and the limited power afforded by the solar-powered vehicle, this toroidal low-gain antenna requires as much gain as possible while maintaining a beam width that could facilitate a ±10° edge of coverage. The natural antenna that produces a toroidal antenna pattern is the dipole, but its limited ≈2.2 dB peak gain would be insufficient. Here a shaped variation of the standard bicone antenna is proposed that could achieve the required gains and bandwidths while maintaining a size that was not excessive. The final geometry that was settled on consisted of a corrugated, shaped bicone, which is fed by a WR112 waveguide-to-coaxial-waveguide transition. This toroidal low-gain antenna (TLGA) geometry produced the requisite gain, moderate sidelobes, and the torus-shaped antenna pattern while maintaining a very good match over the entire required frequency range. Its "horn" geometry is also low-loss and capable of handling higher powers with large margins against multipactor breakdown. The final requirement for the antenna was to link with the DSN with circular polarization. A four-layer meander-line array polarizer was implemented; an approach that was fairly well suited to the TLGA geometry. The principal development of this work was to adapt the standard linear bicone such that its aperture could be increased in order to increase the available gain of the antenna. As one increases the aperture of a standard bicone, the phase variation across the aperture begins to increase, so the larger the aperture becomes, the greater the phase variation. In order to maximize the gain from any aperture antenna, the phase should be kept as uniform as possible. Thus, as the standard bicone's aperture increases, the gain increase becomes less until one reaches a point of diminishing returns. In order to overcome this problem, a shaped aperture is used. Rather than the standard linear bicone, a parabolic bicone was found to reduce the amount of phase variation as the aperture increases. In fact, the phase variation is half that of the standard linear bicone, which leads to higher gain with smaller aperture sizes. The antenna pattern radiated from this parabolic-shaped bicone antenna has fairly high side lobes. The Juno project requested that these sidelobes be minimized. This was accomplished by adding corrugations to the parabolic shape. This corrugated-shaped bicone antenna had reasonably low sidelobes, and the appropriate gain and beamwidth to meet project requirements.

  12. Adaptive bit plane quadtree-based block truncation coding for image compression

    NASA Astrophysics Data System (ADS)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low bit rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared with other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
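
For context, the AMBTC baseline that the paper builds on encodes each block as one bit plane plus two reconstruction levels, the means of the below-mean and at/above-mean pixels. A minimal 1D sketch with hypothetical helper names (real AMBTC works on 2D pixel blocks):

```python
def ambtc_encode(block):
    """AMBTC: bit plane + two levels (means of the low and high groups)."""
    mean = sum(block) / len(block)
    hi = [v for v in block if v >= mean]
    lo = [v for v in block if v < mean]
    a = sum(lo) / len(lo) if lo else mean    # level for pixels below the mean
    b = sum(hi) / len(hi) if hi else mean    # level for pixels at/above the mean
    bitplane = [1 if v >= mean else 0 for v in block]
    return bitplane, a, b

def ambtc_decode(bitplane, a, b):
    return [b if bit else a for bit in bitplane]

bits, a, b = ambtc_encode([10, 12, 50, 52])
print(ambtc_decode(bits, a, b))   # -> [11.0, 11.0, 51.0, 51.0]
```

The quadtree variant in the paper spends the one-bit-per-pixel bit plane only where the block's AMBTC error warrants it, which is where its PSNR gain comes from.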

  13. ALCBEAM - Neutral beam formation and propagation code for beam-based plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Bespamyatnov, I. O.; Rowan, W. L.; Liao, K. T.

    2012-03-01

    ALCBEAM is a new three-dimensional neutral beam formation and propagation code. It was developed to support the beam-based diagnostics installed on the Alcator C-Mod tokamak. The purpose of the code is to provide reliable estimates of the local beam equilibrium parameters, such as beam energy fractions, density profiles and excitation populations. The code effectively unifies the ion beam formation, extraction and neutralization processes with beam attenuation and excitation in plasma and neutral gas and beam stopping by the beam apertures. This paper describes the physical processes interpreted and utilized by the code, along with the exploited computational methods. The description is concluded by an example simulation of beam penetration into the plasma of Alcator C-Mod. The code is successfully being used on the Alcator C-Mod tokamak and is expected to be valuable in the support of beam-based diagnostics in most other tokamak environments. Program summary: Program title: ALCBEAM. Catalogue identifier: AEKU_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKU_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 66 459. No. of bytes in distributed program, including test data, etc.: 7 841 051. Distribution format: tar.gz. Programming language: IDL. Computer: Workstation, PC. Operating system: Linux. RAM: 1 GB. Classification: 19.2. Nature of problem: Neutral beams are commonly used to heat and/or diagnose high-temperature magnetically-confined laboratory plasmas. An accurate neutral beam characterization is required for beam-based measurements of plasma properties. Beam parameters such as density distribution, energy composition, and atomic excited populations of the beam atoms need to be known.
    Solution method: A neutral beam is initially formed as an ion beam, which is extracted from the ion source by high voltage applied to the extraction and accelerating grids. The current distribution of a single beamlet emitted from a single pore of the ion optical system (IOS) depends on the shape of the plasma boundary in the emission region. The total beam extracted by the IOS is calculated at every point of a 3D mesh as the sum of the contributions from each grid pore. The code effectively unifies the ion beam formation, extraction and neutralization processes with neutral beam attenuation and excitation in plasma and neutral gas and beam stopping by the beam apertures. Running time: 10 min for a standard run.

  14. SU-E-J-57: First Development of Adapting to Intrafraction Relative Motion Between Prostate and Pelvic Lymph Nodes Targets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ge, Y; Colvill, E; O’Brien, R

    2015-06-15

    Purpose: Large intrafraction relative motion of multiple targets is common in advanced head and neck, lung, abdominal, gynaecological and urological cancer, jeopardizing treatment outcomes. The objective of this study is to develop a real-time adaptation strategy, for the first time, to accurately correct for the relative motion of multiple targets by reshaping the treatment field using the multi-leaf collimator (MLC). Methods: The principle of tracking the simultaneously treated but differentially moving tumor targets is to determine the new aperture shape that conforms to the shifted targets. Three-dimensional volumes representing the individual targets are projected to the beam's eye view. The leaf openings falling inside each 2D projection are shifted according to the measured motion of each target to form the new aperture shape. Based on the updated beam shape, new leaf positions are determined with an optimized trade-off between target underdose and healthy-tissue overdose, subject to the physical constraints of the MLC. Taking a prostate cancer patient with pelvic lymph node involvement as an example, a preliminary dosimetric study was conducted to demonstrate the potential treatment improvement compared to the state-of-the-art adaptation technique, which shifts the whole beam to track only one target. Results: The world-first intrafraction adaptation system capable of reshaping the beam to correct for the relative motion of multiple targets has been developed. The dose in the static nodes and small bowel is closer to the planned distribution, and the V45 of the small bowel is decreased from 110 cc to 75 cc, a 30% reduction compared to the state-of-the-art adaptation technique. 
    Conclusion: The developed adaptation system to correct for intrafraction relative motion of multiple targets will guarantee tumour coverage and thus enable PTV margin reduction to minimize the high target dose to adjacent organs-at-risk. The authors acknowledge funding support from the Australian NHMRC Australia Fellowship and NHMRC Project Grant No. APP1042375.
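
The reshaping step described in the Methods can be sketched one dimension per leaf pair: shift each pair's opening by the measured motion of the target it projects onto, then clip to the physical leaf travel. This is a deliberately simplified Python sketch; the function and parameter names, units, and per-leaf target assignment are illustrative, and the paper's underdose/overdose trade-off optimization is omitted.

```python
def reshape_aperture(leaf_openings, target_of_leaf, shifts, leaf_travel=(0.0, 40.0)):
    """Shift each leaf pair's opening by the motion of its assigned
    target, clipped to the leaf travel range (all positions in mm)."""
    lo, hi = leaf_travel
    new = []
    for (left, right), tgt in zip(leaf_openings, target_of_leaf):
        dx = shifts[tgt]  # measured lateral shift of this leaf's target
        new.append((min(max(left + dx, lo), hi),
                    min(max(right + dx, lo), hi)))
    return new

# toy case: prostate shifts +3 mm, nodal target stays put
openings = [(10.0, 20.0), (12.0, 22.0), (25.0, 35.0)]
assignment = ["prostate", "prostate", "nodes"]
new = reshape_aperture(openings, assignment, {"prostate": 3.0, "nodes": 0.0})
```

Only the prostate leaves move, so the aperture deforms rather than translating as a whole, which is the difference from single-target beam tracking.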

  15. A generic efficient adaptive grid scheme for rocket propulsion modeling

    NASA Technical Reports Server (NTRS)

    Mo, J. D.; Chow, Alan S.

    1993-01-01

    The objective of this research is to develop an efficient, time-accurate numerical algorithm to discretize the Navier-Stokes equations for prediction of internal one- and two-dimensional and axisymmetric flows. A generic, efficient, elliptic adaptive grid generator is implicitly coupled with a lower-upper (LU) factorization scheme in the ALUNS computer code. Calculations of one-dimensional shock tube wave propagation and of two-dimensional shock wave capture, wave-wave interactions, and shock wave-boundary interactions show that the scheme is stable, accurate and extremely robust. The adaptive grid generator produced a very favorable grid network via a grid-speed technique. The generator has also been applied in the PARC and FDNS codes, and computational results from those codes for solid rocket nozzle flowfields and crystal growth modeling will also be presented at the conference. This research is supported by NASA/MSFC.

  16. Counter-propagation network with variable degree variable step size LMS for single switch typing recognition.

    PubMed

    Yang, Cheng-Huei; Luo, Ching-Hsing; Yang, Cheng-Hong; Chuang, Li-Yeh

    2004-01-01

    Morse code is now being harnessed for use in rehabilitation applications of augmentative-alternative communication and assistive technology, including mobility, environmental control and adapted worksite access. In this paper, Morse code is selected as an adaptive communication device for disabled persons who suffer from muscle atrophy, cerebral palsy or other severe handicaps. A stable typing rate is strictly required for Morse code to be effective as a communication tool, and this restriction is a major hindrance; therefore, a switch-adaptive automatic recognition method with a high recognition rate is needed. The proposed system combines counter-propagation networks with a variable degree variable step size LMS algorithm. It is divided into five stages: space recognition, tone recognition, learning process, adaptive processing, and character recognition. Statistical analyses demonstrated that the proposed method elicited a better recognition rate than alternative methods in the literature.
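
The variable step size LMS idea at the core of such recognizers can be sketched as follows: the step size is inflated when the squared error is large (fast tracking of a drifting typing rate) and deflated as the error falls (low steady-state misadjustment). This is a generic VSS-LMS sketch in Python with a system-identification demo; the parameter names and constants are illustrative, not the paper's variable-degree algorithm.

```python
import numpy as np

def vss_lms(x, d, order=4, mu0=0.05, alpha=0.97, gamma=1e-3,
            mu_min=0.01, mu_max=0.1):
    """Variable step size LMS: mu grows with the squared error and
    decays otherwise, clipped to a stable range."""
    w = np.zeros(order)
    mu = mu0
    errors = []
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ u                   # a priori error
        mu = np.clip(alpha * mu + gamma * e * e, mu_min, mu_max)
        w += mu * e * u                    # LMS weight update
        errors.append(e)
    return w, np.asarray(errors)

# demo: identify a known FIR response from clean observations
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.standard_normal(5000)
d = np.convolve(x, h)[:len(x)]
w, e = vss_lms(x, d)
```

After a few thousand samples the weights settle on the true response, and the error sequence shrinks accordingly.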

  17. Parallel Adaptive Mesh Refinement Library

    NASA Technical Reports Server (NTRS)

    Mac-Neice, Peter; Olson, Kevin

    2005-01-01

    Parallel Adaptive Mesh Refinement Library (PARAMESH) is a package of Fortran 90 subroutines designed to provide a computer programmer with an easy route to extension of (1) a previously written serial code that uses a logically Cartesian structured mesh into (2) a parallel code with adaptive mesh refinement (AMR). Alternatively, in its simplest use, and with minimal effort, PARAMESH can operate as a domain-decomposition tool for users who want to parallelize their serial codes but who do not wish to utilize adaptivity. The package builds a hierarchy of sub-grids to cover the computational domain of a given application program, with spatial resolution varying to satisfy the demands of the application. The sub-grid blocks form the nodes of a tree data structure (a quad-tree in two or an oct-tree in three dimensions). Each grid block has a logically Cartesian mesh. The package supports one-, two- and three-dimensional models.
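
The block/quad-tree organization described above can be illustrated with a minimal refinement loop. This is a toy Python sketch of the idea only, not PARAMESH's Fortran 90 API; the refinement criterion is a stand-in for whatever the application demands.

```python
class Block:
    """One node of a quad-tree of logically Cartesian grid blocks."""
    def __init__(self, x0, y0, size, level=0):
        self.x0, self.y0, self.size, self.level = x0, y0, size, level
        self.children = []

    def refine(self):
        """Split this block into four equal child blocks."""
        h = self.size / 2
        self.children = [Block(self.x0 + i * h, self.y0 + j * h, h,
                               self.level + 1)
                         for j in (0, 1) for i in (0, 1)]

    def leaves(self):
        """Yield the leaf blocks that currently cover this subtree."""
        if not self.children:
            yield self
        else:
            for c in self.children:
                yield from c.leaves()

def refine_where(root, needs_refinement, max_level=3):
    """Repeatedly refine every leaf flagged by the criterion."""
    changed = True
    while changed:
        changed = False
        for leaf in list(root.leaves()):
            if leaf.level < max_level and needs_refinement(leaf):
                leaf.refine()
                changed = True

root = Block(0.0, 0.0, 1.0)
# toy criterion: resolve a feature near the origin
refine_where(root, lambda b: b.x0 < 0.3 and b.y0 < 0.3)
```

The leaves always tile the original domain exactly, with fine blocks clustered where the criterion fired, which is the property the solver relies on.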

  18. An object-oriented approach for parallel self adaptive mesh refinement on block structured grids

    NASA Technical Reports Server (NTRS)

    Lemke, Max; Witsch, Kristian; Quinlan, Daniel

    1993-01-01

    Self-adaptive mesh refinement dynamically matches the computational demands of a solver for partial differential equations to the activity in the application's domain. In this paper we present two C++ class libraries, P++ and AMR++, which significantly simplify the development of sophisticated adaptive mesh refinement codes on (massively) parallel distributed-memory architectures. The development is based on our previous research in this area. The class libraries separate the issues of developing parallel adaptive mesh refinement applications into parallelism, abstracted by P++, and adaptive mesh refinement, abstracted by AMR++. P++ is a parallel array class library that permits efficient development of architecture-independent codes for structured grid applications, and AMR++ provides support for self-adaptive mesh refinement on block-structured grids of rectangular non-overlapping blocks. Using these libraries, the application programmer's work is reduced primarily to specifying the serial single-grid application; the parallel, self-adaptive mesh refinement code is then obtained with minimal effort. Initial results for simple singular perturbation problems solved by self-adaptive multilevel techniques (FAC, AFAC), implemented on prototypes of the P++/AMR++ environment, are presented. Singular perturbation problems frequently arise in large applications, e.g. in computational fluid dynamics. They usually have solutions with layers which require adaptive mesh refinement and fast basic solvers to be resolved efficiently.

  19. Integrated-magnetic apparatus

    NASA Technical Reports Server (NTRS)

    Bloom, Gordon E. (Inventor)

    1998-01-01

    Disclosure is made of an integrated-magnetic apparatus, comprising: winding structure for insulatingly carrying at least two generally flat, laterally offset and spaced apart electrical windings of a power converter around an aperture; a core having a flat exterior face, an interior cavity and an un-gapped core-column that is located within the cavity and that passes through the aperture of the winding structure; flat-sided surface carried by the core and forming an interior chamber that is located adjacent to the flat face of the core and forming a core-column that has a gap and that is located within the chamber; and structure, located around the gapped core-column, for carrying a third electrical winding of the power converter. The first two electrical windings are substantially located within the cavity and are adapted to be transformingly coupled together through the core. The third electrical winding is adapted to be inductively coupled through the gapped core-column to the other electrical windings, and is phased to have the magnetic flux passing through the gapped core-column substantially in the same direction as the magnetic flux passing through the un-gapped core-column and to have substantially the same AC components of flux in the gapped core-column and in the un-gapped core-column.

  20. Multiscale sensorless adaptive optics OCT angiography system for in vivo human retinal imaging.

    PubMed

    Ju, Myeong Jin; Heisler, Morgan; Wahl, Daniel; Jian, Yifan; Sarunic, Marinko V

    2017-11-01

    We present a multiscale sensorless adaptive optics (SAO) OCT system capable of imaging retinal structure and vasculature with various fields-of-view (FOV) and resolutions. Using a single deformable mirror and exploiting the polarization properties of light, the SAO-OCT-A was implemented in a compact and easy-to-operate system. With the ability to adjust the beam diameter at the pupil, retinal imaging was demonstrated at two different numerical apertures with the same system. The general morphological structure and retinal vasculature could be observed with a lateral resolution of a few tens of micrometers using conventional OCT and OCT-A scanning protocols with a 1.7-mm-diameter beam incident at the pupil and a large FOV (15 deg × 15 deg). Changing the system to a higher numerical aperture with a 5.0-mm-diameter beam incident at the pupil and SAO aberration correction, the FOV was reduced to 3 deg × 3 deg for fine detailed imaging of morphological structure and microvasculature such as the photoreceptor mosaic and capillaries. Multiscale functional SAO-OCT imaging was performed on four healthy subjects, demonstrating its functionality and potential for clinical utility. © 2017 Society of Photo-Optical Instrumentation Engineers (SPIE).

  1. Sealing Assembly for Sealing a Port and the Like

    NASA Technical Reports Server (NTRS)

    Haas, Jon W. (Inventor); Haupt, Charles W. (Inventor)

    2000-01-01

    A sealing assembly for a port of a valve or the like is disclosed. In detail, the sealing assembly includes the port having a circular-shaped end with a circular-shaped knife-edge thereon. The sealing assembly further includes a hollow cap having a closed first end with an aperture therethrough and an open second end. The cap further includes internal threads adapted to mate with the external threads of the port. A gasket made of a deformable metal and having flat first and second principal sides is mounted within the cap, the first principal side of the gasket for mounting against the circular-shaped knife-edge of the port. A plunger having a circular-shaped disc portion is adapted to fit within the hollow cap, is engageable with the first principal side of the gasket, and includes a shaft portion extending out of the aperture. The cap and the shaft of the plunger include external wrenching flats. Thus, when the cap is screwed onto the port and the plunger is prevented from rotating by a wrench mounted on the wrenching flats of the shaft portion, the gasket is forced into engagement with the knife-edge in pure compression; no rotation of the gasket occurs, and the knife-edge locally deforms the gasket, sealing the port.

  2. Evolutionary stasis in Euphorbiaceae pollen: selection and constraints.

    PubMed

    Matamoro-Vidal, A; Furness, C A; Gouyon, P-H; Wurdack, K J; Albert, B

    2012-06-01

    Although much attention has been paid to the role of stabilizing selection, empirical analyses testing the role of developmental constraints in evolutionary stasis remain rare, particularly for plants. This topic is studied here with a focus on the evolution of a pollen ontogenetic feature, the last points of callose deposition (LPCD) pattern, involved in the determination of an adaptive morphological pollen character (aperture pattern). The LPCD pattern exhibits a low level of evolution in eudicots, as compared to the evolution observed in monocots. Stasis in this pattern might be explained by developmental constraints expressed during male meiosis (microsporogenesis) or by selective pressures expressed through the adaptive role of the aperture pattern. Here, we demonstrate that the LPCD pattern is conserved in Euphorbiaceae s.s. and that this conservatism is primarily due to selective pressures. A phylogenetic association was found between the putative removal of selective pressures on pollen morphology after the origin of inaperturate pollen, and the appearance of variation in microsporogenesis and in the resulting LPCD pattern, suggesting that stasis was due to these selective pressures. However, even in a neutral context, variation in microsporogenesis was biased. This should therefore favour the appearance of some developmental and morphological phenotypes rather than others. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.

  3. Multigrid solution of internal flows using unstructured solution adaptive meshes

    NASA Technical Reports Server (NTRS)

    Smith, Wayne A.; Blake, Kenneth R.

    1992-01-01

    This is the final report of the NASA Lewis SBIR Phase 2 Contract Number NAS3-25785, Multigrid Solution of Internal Flows Using Unstructured Solution Adaptive Meshes. The objective of this project, as described in the Statement of Work, is to develop and deliver to NASA a general three-dimensional Navier-Stokes code using unstructured solution-adaptive meshes for accuracy and multigrid techniques for convergence acceleration. The code will primarily be applied, but not necessarily limited, to high speed internal flows in turbomachinery.

  4. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
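
The block truncation coding (BTC) evaluated above (and adapted in the variable-rate SAR coder described at the head of this list) can be sketched in a few lines: each block stores a one-bit mask plus two reconstruction levels chosen to preserve the block's sample mean and standard deviation. This is a textbook BTC sketch in Python, not the exact coder tested on Seasat data.

```python
import numpy as np

def btc_encode_block(block):
    """Classic BTC: a 1-bit-per-pixel mask plus two levels chosen to
    preserve the block's first two sample moments."""
    m, sigma = block.mean(), block.std()
    mask = block >= m
    q, n = int(mask.sum()), block.size
    if q in (0, n):                  # flat block: a single level suffices
        return mask, m, m
    a = m - sigma * np.sqrt(q / (n - q))     # level for pixels below the mean
    b = m + sigma * np.sqrt((n - q) / q)     # level for pixels at/above the mean
    return mask, a, b

def btc_decode_block(mask, a, b):
    return np.where(mask, b, a)

block = np.array([[12.0, 200.0], [14.0, 190.0]])
mask, a, b = btc_encode_block(block)
rec = btc_decode_block(mask, a, b)
```

With 4x4 blocks and 8-bit levels this costs 2 bits/pixel, consistent with the 1-2 bit/pixel range quoted above; the moment-preserving levels are what keep pointlike targets from being averaged away.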

  5. Adapting HYDRUS-1D to simulate overland flow and reactive transport during sheet flow deviations

    USDA-ARS?s Scientific Manuscript database

    The HYDRUS-1D code is a popular numerical model for solving the Richards equation for variably-saturated water flow and solute transport in porous media. This code was adapted to solve, rather than the Richards equation for subsurface flow, the diffusion wave equation for overland flow at the soil sur...

  6. Layer-based buffer aware rate adaptation design for SHVC video streaming

    NASA Astrophysics Data System (ADS)

    Gudumasu, Srinivas; Hamza, Ahmed; Asbun, Eduardo; He, Yong; Ye, Yan

    2016-09-01

    This paper proposes a layer-based buffer-aware rate adaptation design which is able to avoid abrupt video quality fluctuation, reduce re-buffering latency and improve bandwidth utilization when compared to a conventional simulcast-based adaptive streaming system. The proposed adaptation design schedules DASH segment requests based on the estimated bandwidth, dependencies among video layers and layer buffer fullness. Scalable HEVC video coding is the latest state-of-the-art video coding technique that can alleviate various issues caused by simulcast-based adaptive video streaming. With scalable coded video streams, the video is encoded once into a number of layers representing different qualities and/or resolutions: a base layer (BL) and one or more enhancement layers (EL), each incrementally enhancing the quality of the lower layers. Such a layer-based coding structure allows fine-granularity rate adaptation for video streaming applications. Two video streaming use cases are presented in this paper. The first use case is to stream HD SHVC video over a wireless network where available bandwidth varies, and a performance comparison between the proposed layer-based streaming approach and the conventional simulcast streaming approach is provided. The second use case is to stream 4K/UHD SHVC video over a hybrid access network that consists of a 5G millimeter wave high-speed wireless link and a conventional wired or WiFi network. The simulation results verify that the proposed layer-based rate adaptation approach is able to utilize the bandwidth more efficiently. As a result, a more consistent viewing experience with higher quality video content and minimal video quality fluctuations can be presented to the user.
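
The layer-scheduling idea can be sketched as a bandwidth-and-buffer-gated walk up the layer hierarchy: request enhancement layers only while their cumulative bitrate fits a budget derived from the bandwidth estimate, tightened when the buffer runs low. The thresholds, scaling factors and function names below are illustrative assumptions, not the paper's algorithm.

```python
def choose_layers(layer_bitrates_kbps, est_bw_kbps, buffer_s,
                  low_buf=5.0, high_buf=15.0):
    """Pick how many SHVC layers (BL first, then ELs in dependency
    order) to request for the next segment."""
    if buffer_s < low_buf:
        budget = 0.8 * est_bw_kbps   # conservative near underrun
    elif buffer_s > high_buf:
        budget = 1.1 * est_bw_kbps   # buffer headroom allows probing up
    else:
        budget = est_bw_kbps
    n, total = 0, 0.0
    for rate in layer_bitrates_kbps:
        if total + rate > budget:
            break                    # higher ELs depend on lower ones
        total += rate
        n += 1
    return max(n, 1)                 # always fetch the base layer

# BL = 1 Mbps; ELs add 2 and 4 Mbps
n = choose_layers([1000, 2000, 4000], est_bw_kbps=8000, buffer_s=20)
```

Dropping an enhancement layer degrades quality gracefully instead of stalling, which is the advantage over switching between independent simulcast representations.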

  7. Identification and Classification of Orthogonal Frequency Division Multiple Access (OFDMA) Signals Used in Next Generation Wireless Systems

    DTIC Science & Technology

    2012-03-01

    ...advanced antenna systems, AMC adaptive modulation and coding, AWGN additive white Gaussian noise, BPSK binary phase shift keying, BS base station, BTC... QAM-16, and QAM-64, and coding types include convolutional coding (CC), convolutional turbo coding (CTC), block turbo coding (BTC), zero-terminating...

  8. Development and application of the dynamic system doctor to nuclear reactor probabilistic risk assessments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunsman, David Marvin; Aldemir, Tunc; Rutt, Benjamin

    2008-05-01

    This LDRD project has produced a tool that makes probabilistic risk assessments (PRAs) of nuclear reactors - analyses which are very resource intensive - more efficient. PRAs of nuclear reactors are being increasingly relied on by the United States Nuclear Regulatory Commission (U.S.N.R.C.) for licensing decisions for current and advanced reactors. Yet PRAs are produced much as they were 20 years ago. The work here applied a modern systems analysis technique - a system-independent multi-task computer driver routine - to the accident progression analysis portion of the PRA. Initially, the objective of the work was to fuse the accident progression event tree (APET) portion of a PRA to the dynamic system doctor (DSD) created by Ohio State University. Instead, during the initial efforts, it was found that the DSD could be linked directly to a detailed accident progression phenomenological simulation code - the type on which APET construction and analysis relies, albeit indirectly - and thereby directly create and analyze the APET. The expanded DSD computational architecture and infrastructure that was created during this effort is called ADAPT (Analysis of Dynamic Accident Progression Trees). ADAPT is a system software infrastructure that supports execution and analysis of multiple dynamic event-tree simulations on distributed environments. A simulator abstraction layer was developed, and a generic driver was implemented for executing simulators on a distributed environment. As a demonstration of the methodological tool, ADAPT was applied to quantify the likelihood of competing accident progression pathways occurring for a particular accident scenario in a particular reactor type using MELCOR, an integrated severe accident analysis code developed at Sandia. (ADAPT was intentionally created with flexibility, however, and is not limited to interacting with only one code. 
    With minor coding changes to input files, ADAPT can be linked to other such codes.) The results of this demonstration indicate that the approach can significantly reduce the resources required for Level 2 PRAs. From the phenomenological viewpoint, ADAPT can also treat the associated epistemic and aleatory uncertainties. This methodology can also be used for analyses of other complex systems: any complex system can be analyzed using ADAPT if its workings can be displayed as an event tree, there is a computer code that simulates how those events could progress, and that simulator code has switches to turn system events, phenomena, etc., on and off. Using and applying ADAPT to particular problems is not human-independent: while the human resources for the creation and analysis of the accident progression are significantly decreased, knowledgeable analysts are still necessary to apply ADAPT successfully. This research and development effort has met its original goals and then exceeded them.
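
The dynamic event-tree idea behind ADAPT can be illustrated with a toy expansion: each branch point contributes a set of outcomes with probabilities, and a pathway's likelihood is the product along its branches. In ADAPT the branchings and probabilities come from simulator runs such as MELCOR; the outcome names and numbers below are invented for illustration.

```python
from itertools import product

def expand_event_tree(branch_points):
    """Enumerate all accident progression pathways.

    branch_points: list of branch points, each a list of
    (outcome_name, probability) pairs. Returns a dict mapping each
    pathway (tuple of outcome names) to its likelihood."""
    pathways = {}
    for combo in product(*branch_points):
        names = tuple(name for name, _ in combo)
        p = 1.0
        for _, prob in combo:
            p *= prob                # independent branchings assumed
        pathways[names] = p
    return pathways

tree = [
    [("spray-on", 0.9), ("spray-fail", 0.1)],
    [("vessel-holds", 0.7), ("vessel-breach", 0.3)],
]
paths = expand_event_tree(tree)
```

The pathway probabilities sum to one by construction; in a real analysis each branch would be triggered by a simulator switch rather than listed up front, and epistemic uncertainty would be layered on top.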

  9. Energy and technology review: Engineering modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.

    1986-10-01

    This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.

  10. Current and Density Observations of Packets of Nonlinear Internal Waves on the Outer New Jersey Shelf

    DTIC Science & Technology

    2011-05-01

    Authors: William Teague, Hemantha Wijesekera, W. Avera, Z. R. Hallock. ...satellite synthetic aperture radar (SAR) imagery (Jackson and Apel 2004). NLIWs can have a surface signature detectable by both ship and satellite...

  11. Thin film concentrator panel development

    NASA Technical Reports Server (NTRS)

    Zimmerman, D. K.

    1982-01-01

    The development and testing of a rigid panel concept that utilizes a thin-film reflective surface for application to a low-cost point-focusing solar concentrator is discussed. It is shown that a thin-film reflective surface is acceptable for use on solar concentrators, including 1500 °F applications, and that a formed-steel-sheet substrate is a good choice for concentrator panels. The panel has good optical properties, acceptable forming tolerances, an environmentally resistant substrate and stiffeners, and adaptability to production rates ranging from low-volume to mass production. Computer simulations of the concentrator optics were run using the selected reflector panel design. Experimentally determined values for reflector surface specularity and reflectivity, along with dimensional data, were used in the analysis. The simulations provided the intercept factor and the net energy into the aperture as a function of aperture size for different surface and pointing errors. Point-source and sun-source optical tests were also performed.

  12. Data compression using adaptive transform coding. Appendix 1: Item 1. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Rost, Martin Christopher

    1988-01-01

    Adaptive low-rate source coders are described in this dissertation. These coders adapt by adjusting the complexity of the coder to match the local coding difficulty of the image. This is accomplished by using a threshold-driven maximum-distortion criterion to select the specific coder used. The different coders are built using variable-blocksize transform techniques, and the threshold criterion selects small transform blocks to code the more difficult regions and larger blocks to code the less complex regions. A theoretical framework is constructed from which the study of these coders can be explored. An algorithm for selecting the optimal bit allocation for the quantization of transform coefficients is developed; it achieves more accurate bit assignments than the algorithms currently used in the literature. Upper and lower bounds for the bit-allocation distortion-rate function are developed, and an obtainable distortion-rate function is derived for a particular scalar quantizer mixing method that can be used to code transform coefficients at any rate.
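
A standard way to pose the transform-coefficient bit allocation that such work refines is greedy marginal analysis under the high-rate quantizer model D_i = sigma_i^2 * 2^(-2*b_i): give each successive bit to the coefficient whose distortion drops the most. This textbook sketch is only the common baseline, not the dissertation's algorithm.

```python
import heapq

def allocate_bits(variances, total_bits):
    """Greedy integer bit allocation: each bit goes to the coefficient
    with the largest marginal distortion reduction, which under the
    model sigma^2 * 2**(-2*b) is 0.75 * sigma^2 * 4**(-b)."""
    bits = [0] * len(variances)
    # min-heap on negated gains acts as a max-heap on gains
    heap = [(-0.75 * v, i) for i, v in enumerate(variances)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        gain, i = heapq.heappop(heap)
        bits[i] += 1
        heapq.heappush(heap, (gain / 4.0, i))  # next bit gains 4x less
    return bits

# high-variance (typically low-frequency) coefficients get more bits
bits = allocate_bits([16.0, 4.0, 1.0, 0.25], total_bits=8)
```

Each factor-of-4 drop in the marginal gain is what steers bits toward high-variance coefficients first, matching the familiar half-log-variance allocation rule in the integer-constrained limit.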

  13. Radar transponder antenna pattern analysis for the space shuttle

    NASA Technical Reports Server (NTRS)

    Radcliff, Roger

    1989-01-01

    In order to improve tracking capability, radar transponder antennas will soon be mounted on the Shuttle solid rocket boosters (SRBs). These four antennas, each an identical cavity-backed helix operating at 5.765 GHz, will be mounted near the top of the SRBs, adjacent to the intertank portion of the external tank. The purpose of this work is to calculate the roll-plane pattern (the plane perpendicular to the SRB axes and containing the antennas) in this complex electromagnetic environment. The large electrical size of the problem mandates an optical (asymptotic) approach. Development of a specific code for this application is beyond the scope of a summer fellowship; thus a general-purpose code, the Numerical Electromagnetics Code - Basic Scattering Code, was chosen as the computational tool. This code is based on the modern Geometrical Theory of Diffraction and allows computation of scattering from bodies composed of canonical shapes such as plates and elliptic cylinders. Apertures mounted on a curved surface (the SRB) cannot be handled by the code, so an antenna model consisting of wires excited by a method-of-moments current input was devised to approximate the actual performance of the antennas. The improvised antenna model matched well with measurements taken at the MSFC range. The SRBs, the external tank, and the shuttle nose were modeled as circular cylinders, and the code produced what is thought to be a reasonable roll-plane pattern.

  14. Automatic Adaptation to Fast Input Changes in a Time-Invariant Neural Circuit

    PubMed Central

    Bharioke, Arjun; Chklovskii, Dmitri B.

    2015-01-01

    Neurons must faithfully encode signals that can vary over many orders of magnitude despite having only limited dynamic ranges. For a correlated signal, this dynamic-range constraint can be relieved by subtracting away components of the signal that can be predicted from the past, a strategy known as predictive coding, which relies on learning the input statistics. However, the statistics of natural input signals can also vary over very short time scales, e.g., following saccades across a visual scene. To maintain a reduced transmission cost for signals with rapidly varying statistics, neuronal circuits implementing predictive coding must also rapidly adapt their properties. Experimentally, in different sensory modalities, sensory neurons have shown such adaptations within 100 ms of an input change. Here, we show first that linear neurons connected in a feedback inhibitory circuit can implement predictive coding. We then show that adding a rectification nonlinearity to such a feedback inhibitory circuit allows it to automatically adapt and approximate the performance of an optimal linear predictive coding network, over a wide range of inputs, while keeping its underlying temporal and synaptic properties unchanged. We demonstrate that the resulting changes to the linearized temporal filters of this nonlinear network match the fast adaptations observed experimentally in different sensory modalities, in different vertebrate species. Therefore, the nonlinear feedback inhibitory network can provide automatic adaptation to fast-varying signals, maintaining the dynamic range necessary for accurate neuronal transmission of natural inputs. PMID:26247884
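
The feedback-inhibitory predictive coding described above can be caricatured in a few lines: the transmitted signal is the input minus a leaky running estimate of its recent past, so a correlated input yields small residuals that fit a limited dynamic range. This is a linear toy model in Python with illustrative constants; the paper's rectified, automatically adapting version is not reproduced here.

```python
import numpy as np

def predictive_coder(x, a=0.9):
    """Transmit the residual between the input and an inhibitory
    feedback prediction (a leaky estimate of the recent past)."""
    pred = 0.0
    out = np.empty_like(x)
    for n, xn in enumerate(x):
        out[n] = xn - pred               # residual = input - feedback
        pred = a * pred + (1 - a) * xn   # leaky running estimate
    return out

# a slowly drifting (highly correlated) signal
rng = np.random.default_rng(1)
slow = np.cumsum(rng.standard_normal(2000)) * 0.05
resid = predictive_coder(slow)
```

Because successive samples of the drifting input are predictable from the feedback estimate, the residuals occupy a much smaller range than the raw signal, which is the dynamic-range relief the record describes.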

  15. An adaptable binary entropy coder

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.

    2001-01-01

    We present a novel entropy coding technique which is based on recursive interleaving of variable-to-variable length binary source codes. We discuss code design and performance estimation methods, as well as practical encoding and decoding algorithms.

  16. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope.

    PubMed

    Sulai, Yusufu N; Dubra, Alfredo

    2014-09-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channels in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 root mean square (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals, without reducing its bandwidth.
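
Sharpness-metric maximization of this kind can be sketched as a modal hill climb: bias each wavefront-corrector mode over a few amplitudes and keep the one that sharpens the image most. The acquire function below is a simulated stand-in for grabbing a frame with the corrector set to the given modal coefficients; the metric, search and all names are generic illustrations, not the paper's exact procedure.

```python
import numpy as np

def sharpness(img):
    """Sharpness metric: sum of squared normalized intensities
    (larger for more concentrated light)."""
    p = img / img.sum()
    return float((p ** 2).sum())

def optimize_modes(acquire, n_modes, amplitudes=(-0.2, -0.1, 0.0, 0.1, 0.2)):
    """One pass of modal sensorless optimization: per mode, try a few
    bias amplitudes and keep the sharpest."""
    coeffs = np.zeros(n_modes)
    for m in range(n_modes):
        scores = []
        for a in amplitudes:
            trial = coeffs.copy()
            trial[m] += a
            scores.append((sharpness(acquire(trial)), a))
        coeffs[m] += max(scores)[1]   # amplitude with best sharpness
    return coeffs

# simulated camera: residual aberration broadens a Gaussian spot
true_aberration = np.array([0.1, -0.2, 0.0, 0.2, -0.1])
yy, xx = np.mgrid[-16:16, -16:16].astype(float)
def acquire(coeffs):
    width = 1.0 + np.sum((coeffs - true_aberration) ** 2)
    return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * width ** 2))

found = optimize_modes(acquire, n_modes=5)
```

With an image-based metric the estimate automatically includes the non-common path, which is why the correction can then be held by biasing the wavefront sensor signals.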

  17. A Neural Mechanism for Time-Window Separation Resolves Ambiguity of Adaptive Coding

    PubMed Central

    Hildebrandt, K. Jannis; Ronacher, Bernhard; Hennig, R. Matthias; Benda, Jan

    2015-01-01

    The senses of animals are confronted with changing environments and different contexts. Neural adaptation is one important tool to adjust sensitivity to varying intensity ranges. For instance, in a quiet night outdoors, our hearing is more sensitive than when we are confronted with the plurality of sounds in a large city during the day. However, adaptation also removes available information on absolute sound levels and may thus cause ambiguity. Experimental data on the trade-off between benefits and loss through adaptation is scarce and very few mechanisms have been proposed to resolve it. We present an example where adaptation is beneficial for one task—namely, the reliable encoding of the pattern of an acoustic signal—but detrimental for another—the localization of the same acoustic stimulus. With a combination of neurophysiological data, modeling, and behavioral tests, we show that adaptation in the periphery of the auditory pathway of grasshoppers enables intensity-invariant coding of amplitude modulations, but at the same time, degrades information available for sound localization. We demonstrate how focusing the response of localization neurons to the onset of relevant signals separates processing of localization and pattern information temporally. In this way, the ambiguity of adaptive coding can be circumvented and both absolute and relative levels can be processed using the same set of peripheral neurons. PMID:25761097

  18. The tactile speed aftereffect depends on the speed of adapting motion across the skin rather than other spatiotemporal features

    PubMed Central

    Seizova-Cajic, Tatjana; Holcombe, Alex O.

    2015-01-01

    After prolonged exposure to a surface moving across the skin, this felt movement appears slower, a phenomenon known as the tactile speed aftereffect (tSAE). We asked which feature of the adapting motion drives the tSAE: speed, the spacing between texture elements, or the frequency with which they cross the skin. After adapting to a ridged moving surface with one hand, participants compared the speed of test stimuli on adapted and unadapted hands. We used surfaces with different spatial periods (SPs; 3, 6, 12 mm) that produced adapting motion with different combinations of adapting speed (20, 40, 80 mm/s) and temporal frequency (TF; 3.4, 6.7, 13.4 ridges/s). The primary determinant of tSAE magnitude was speed of the adapting motion, not SP or TF. This suggests that adaptation occurs centrally, after speed has been computed from SP and TF, and/or that it reflects a speed cue independent of those features in the first place (e.g., indentation force). In a second experiment, we investigated the properties of the neural code for speed. Speed tuning predicts that adaptation should be greatest for speeds at or near the adapting speed. However, the tSAE was always stronger when the adapting stimulus was faster (242 mm/s) than the test (30–143 mm/s) compared with when the adapting and test speeds were matched. These results give no indication of speed tuning and instead suggest that adaptation occurs at a level where an intensive code dominates. In an intensive code, the faster the stimulus, the more the neurons fire. PMID:26631149

  19. 2D hydrodynamic simulations of a variable length gas target for density down-ramp injection of electrons into a laser wakefield accelerator

    NASA Astrophysics Data System (ADS)

    Kononenko, O.; Lopes, N. C.; Cole, J. M.; Kamperidis, C.; Mangles, S. P. D.; Najmudin, Z.; Osterhoff, J.; Poder, K.; Rusby, D.; Symes, D. R.; Warwick, J.; Wood, J. C.; Palmer, C. A. J.

    2016-09-01

    In this work, two-dimensional (2D) hydrodynamic simulations of a variable length gas cell were performed using the open source fluid code OpenFOAM. The gas cell was designed to study controlled injection of electrons into a laser-driven wakefield at the Astra Gemini laser facility. The target consists of two compartments: an accelerator and an injector section connected via an aperture. A sharp transition between the peak and plateau density regions in the injector and accelerator compartments, respectively, was observed in simulations with various inlet pressures. The fluid simulations indicate that the length of the down-ramp connecting the sections depends on the aperture diameter, as does the density drop outside the entrance and the exit cones. Further studies showed that increasing the inlet pressure leads to turbulence and strong fluctuations in density along the axial profile during target filling, and consequently, is expected to negatively impact the accelerator stability.

  20. An adaptive distributed data aggregation based on RCPC for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Hua, Guogang; Chen, Chang Wen

    2006-05-01

    One of the most important design issues in wireless sensor networks is energy efficiency. Data aggregation has significant impact on the energy efficiency of wireless sensor networks. With massive deployment of sensor nodes and limited energy supply, data aggregation has been considered an essential paradigm for data collection in sensor networks. Recently, distributed source coding has been demonstrated to possess several advantages in data aggregation for wireless sensor networks. Distributed source coding is able to encode sensor data at a lower bit rate without direct communication among sensor nodes. To ensure reliable and high-throughput transmission of the aggregated data, we propose in this research progressive transmission and decoding of Rate-Compatible Punctured Convolutional (RCPC) coded data aggregation with distributed source coding. Our proposed rate-1/2 RSC codes with the Viterbi algorithm for distributed source coding are able to guarantee that, even without any correlation between the data, the decoder can always decode the data correctly without wasting energy. The proposed approach achieves two aspects of adaptive data aggregation for wireless sensor networks. First, the RCPC coding facilitates adaptive compression corresponding to the correlation of the sensor data. When the data correlation is high, a higher compression ratio can be achieved; otherwise, a lower compression ratio will be achieved. Second, the data aggregation is adaptively accumulated. There is no waste of energy in the transmission: even if there is no correlation among the data, the energy consumed is at the same level as raw data collection. Experimental results have shown that the proposed distributed data aggregation based on RCPC is able to achieve high-throughput and low-energy-consumption data collection for wireless sensor networks.
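
    The rate-compatibility mechanism behind RCPC codes can be sketched as follows (an illustration of the general principle with made-up puncturing patterns, not the paper's actual code family): a rate-1/2 mother convolutional code is punctured with nested patterns, so every lower-rate code transmits a superset of the bits of every higher-rate one, and stepping down in rate only requires sending the previously punctured bits.

```python
# Rate-1/2 mother code, generators (7, 5) in octal, constraint length 3.
G = (0b111, 0b101)

def conv_encode(bits):
    """Encode a bit list with the rate-1/2 mother code (no tail bits)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        for g in G:
            out.append(bin(state & g).count("1") % 2)
    return out

# Nested puncturing patterns over a period of 4 mother-code bits; a '1'
# means the bit is transmitted. Each higher-rate pattern keeps a subset
# of the bits of the lower-rate ones -- the rate-compatibility property.
PATTERNS = {
    "1/2": [1, 1, 1, 1],
    "2/3": [1, 1, 1, 0],
    "4/5": [1, 1, 0, 0],  # hypothetical pattern, for illustration only
}

def puncture(coded, rate):
    p = PATTERNS[rate]
    return [c for i, c in enumerate(coded) if p[i % len(p)]]

def incremental_bits(coded, old_rate, new_rate):
    """Bits to send when stepping down from old_rate to new_rate."""
    po, pn = PATTERNS[old_rate], PATTERNS[new_rate]
    return [c for i, c in enumerate(coded)
            if pn[i % 4] and not po[i % 4]]
```

    A sender can thus start at the highest rate and, on decoding failure at the sink, transmit only `incremental_bits(...)` rather than re-encoding, which is what makes the scheme attractive for energy-constrained aggregation.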

  1. Adaptive decoding of convolutional codes

    NASA Astrophysics Data System (ADS)

    Hueske, K.; Geldmacher, J.; Götze, J.

    2007-06-01

    Convolutional codes, which are frequently used as error correction codes in digital transmission systems, are generally decoded using the Viterbi Decoder. On the one hand the Viterbi Decoder is an optimum maximum likelihood decoder, i.e. the most probable transmitted code sequence is obtained. On the other hand the mathematical complexity of the algorithm only depends on the used code, not on the number of transmission errors. To reduce the complexity of the decoding process for good transmission conditions, an alternative syndrome based decoder is presented. The reduction of complexity is realized by two different approaches, the syndrome zero sequence deactivation and the path metric equalization. The two approaches enable an easy adaptation of the decoding complexity for different transmission conditions, which results in a trade-off between decoding complexity and error correction performance.
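
    The syndrome idea behind such low-complexity decoding can be sketched in a few lines (an illustration of the principle, not the authors' full decoder): for a rate-1/2 code with generator polynomials g1(D), g2(D), the syndrome s(D) = v1(D)g2(D) + v2(D)g1(D) (mod 2) is zero whenever the received streams v1, v2 are error-free, so blocks with an all-zero syndrome can skip error correction entirely.

```python
def poly_mul(a, b):
    """Multiply two GF(2) polynomials given as coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

G1 = [1, 1, 1]  # g1(D) = 1 + D + D^2
G2 = [1, 0, 1]  # g2(D) = 1 + D^2

def encode(u):
    """Rate-1/2 encoding: v1 = u*g1, v2 = u*g2."""
    return poly_mul(u, G1), poly_mul(u, G2)

def syndrome(v1, v2):
    """s = v1*g2 + v2*g1; zero iff the streams fit the code."""
    s1, s2 = poly_mul(v1, G2), poly_mul(v2, G1)
    n = max(len(s1), len(s2))
    s1 += [0] * (n - len(s1))
    s2 += [0] * (n - len(s2))
    return [a ^ b for a, b in zip(s1, s2)]

u = [1, 0, 1, 1]
v1, v2 = encode(u)
assert not any(syndrome(v1, v2))    # clean channel: syndrome is zero
v1_err = v1[:]
v1_err[2] ^= 1                      # flip one received bit
assert any(syndrome(v1_err, v2))    # error detected: run the full decoder
```

    Under good channel conditions most blocks pass the zero-syndrome check, which is the source of the complexity savings the abstract describes.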

  2. Subaperture correlation based digital adaptive optics for full field optical coherence tomography.

    PubMed

    Kumar, Abhishek; Drexler, Wolfgang; Leitgeb, Rainer A

    2013-05-06

    This paper proposes a sub-aperture correlation based numerical phase correction method for interferometric full field imaging systems, provided the complex object field information can be extracted. This method corrects for the wavefront aberration at the pupil/Fourier transform plane without the need for adaptive optics, spatial light modulators (SLMs), or additional cameras. We show that this method does not require knowledge of any system parameters. In the simulation study, we consider a full field swept source OCT (FF SSOCT) system to show the working principle of the algorithm. Experimental results are presented for a technical and a biological sample to demonstrate proof of principle.

  3. Grid-Adapted FUN3D Computations for the Second High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Lee-Rausch, E. M.; Rumsey, C. L.; Park, M. A.

    2014-01-01

    Contributions of the unstructured Reynolds-averaged Navier-Stokes code FUN3D to the 2nd AIAA CFD High Lift Prediction Workshop are described, and detailed comparisons are made with experimental data. Using workshop-supplied grids, results for the clean wing configuration are compared with results from the structured code CFL3D. Using the same turbulence model, both codes compare reasonably well in terms of total forces and moments, and the maximum lift is similarly over-predicted for both codes compared to experiment. By including more representative geometry features such as slat and flap brackets and slat pressure tube bundles, FUN3D captures the general effects of the Reynolds number variation, but under-predicts maximum lift on workshop-supplied grids in comparison with the experimental data, due to excessive separation. However, when output-based, off-body grid adaptation in FUN3D is employed, results improve considerably. In particular, when the geometry includes both brackets and the pressure tube bundles, grid adaptation results in a more accurate prediction of lift near stall in comparison with the wind-tunnel data. Furthermore, a rotation-corrected turbulence model shows improved pressure predictions on the outboard span when using adapted grids.

  4. Collimated proton pencil-beam scanning for superficial targets: impact of the order of range shifter and aperture

    NASA Astrophysics Data System (ADS)

    Bäumer, C.; Janson, M.; Timmermann, B.; Wulff, J.

    2018-04-01

    To assess whether apertures should be mounted upstream or downstream of a range-shifting block when these field-shaping devices are combined with the pencil-beam scanning delivery technique (PBS). The lateral dose fall-off served as a benchmark parameter. Both options realizing PBS-with-apertures were compared to the uniform scanning mode. We also evaluated the difference regarding the out-of-field dose caused by interactions of protons in beam-shaping devices. The potential benefit of the downstream configuration over the upstream configuration was estimated analytically. Guided by this theoretical evaluation, a mechanical adapter was developed which transforms the upstream configuration provided by the proton machine vendor into a downstream configuration. Transversal dose profiles were calculated with the Monte-Carlo based dose engine of the commercial treatment planning system RayStation 6. Two-dimensional dose planes were measured with an ionization chamber array and a scintillation detector at different depths and compared to the calculation. Additionally, a clinical example for the irradiation of the orbit was compared for both PBS options and a uniform scanning treatment plan. Assuming the same air gap, the lateral dose fall-off at the field edge at a depth of a few centimeters is 20% smaller for the aperture-downstream configuration than for the upstream one. For both options of PBS-with-apertures the dose fall-off is larger than in uniform scanning delivery mode if the minimum accelerator energy is 100 MeV. The RayStation treatment planning system calculated the width of the lateral dose fall-off with an accuracy of typically 0.1 mm–0.3 mm. Although experiments and calculations indicate a ranking of the three delivery options regarding lateral dose fall-off, there seems to be a limited impact on a multi-field treatment plan.

  5. Polarization-multiplexed rate-adaptive non-binary-quasi-cyclic-LDPC-coded multilevel modulation with coherent detection for optical transport networks.

    PubMed

    Arabaci, Murat; Djordjevic, Ivan B; Saunders, Ross; Marcoccia, Roberto M

    2010-02-01

    In order to achieve high-speed transmission over optical transport networks (OTNs) and maximize their throughput, we propose using a rate-adaptive polarization-multiplexed coded multilevel modulation with coherent detection based on component non-binary quasi-cyclic (QC) LDPC codes. Compared to the prior-art bit-interleaved LDPC-coded modulation (BI-LDPC-CM) scheme, the proposed non-binary LDPC-coded modulation (NB-LDPC-CM) scheme not only reduces latency due to symbol- instead of bit-level processing but also provides either an impressive reduction in computational complexity or striking improvements in coding gain, depending on the constellation size. As the paper presents, the proposed NB-LDPC-CM scheme addresses the needs of future OTNs better than its prior-art binary counterpart: achieving the target BER performance and providing the maximum possible throughput over the entire lifetime of the OTN.

  6. Transverse Diode Pumping of Solid-State Lasers

    DTIC Science & Technology

    1992-05-29

    more common apertures (laser rod end and cavity end mirror) leads to a thin-film coating damage issue. The transverse pumped geometry avoids the...proprietary one-half inch square cooler developed for high-power adaptive optics mirror applications. The laser performance observed, with up to 35 watts of...including the development of active mirrors capable of sustaining high power loadings. As part of those efforts, TTC has developed a small (one-half inch

  7. Studying the Sky/Planets Can Drown You in Images: Machine Learning Solutions at JPL/Caltech

    NASA Technical Reports Server (NTRS)

    Fayyad, U. M.

    1995-01-01

    JPL is working to develop a domain-independent system capable of small-scale object recognition in large image databases for science analysis. Two applications discussed are the cataloging of three billion sky objects in the Sky Image Cataloging and Analysis Tool (SKICAT) and the detection of possibly one million small volcanoes visible in the Magellan synthetic aperture radar images of Venus (JPL Adaptive Recognition Tool, JARTool).

  8. Push type fastener

    NASA Technical Reports Server (NTRS)

    Jackson, Steven A. (Inventor)

    1994-01-01

    A push type fastener for fastening a movable structural part (41) to a fixed structural part (43), wherein the coupling and decoupling actions are both a push type operation, the fastener consisting of a plunger (12) having a shank (20) with a plunger head (18) at one end and a threaded end portion (26a) at the other end, an expandable grommet (14) adapted to receive the plunger shank (20) therethrough, and an attachable head (16) which is securable to the threaded end of the plunger shank (20). The fastener (10) requires each structural part (41, 43) to be provided with an aperture (45, 46) and the attachable head (16) to be smaller than the aperture (46) in the second structural part. The plunger (12) is extensible through the grommet (14) and is structurally configured with an external camming surface (25) which is cooperatively engageable with internal surfaces (38) of the grommet so that when the plunger is inserted in the grommet, the relative positioning of said cooperable camming surfaces determines the expansion of the grommet. Coupling of the parts is effected when the grommet is inserted in the aperture (46) in the fixed structural part (43) and expanded by pushing the plunger head (18) and plunger at least a minimal distance through the grommet. Decoupling is effected by pushing the attachable head (16).

  9. Reverberant acoustic energy in auditoria that comprise systems of coupled rooms

    NASA Astrophysics Data System (ADS)

    Summers, Jason E.

    2003-11-01

    A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical acoustics predictions based on beam-axis tracing with scale model measurements indicate errors resulting from tail-correction assuming constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is theoretically and experimentally shown to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of statistical-acoustics and geometrical-acoustics models and predictions for coupling apertures all agree with measurements.
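
    The statistical-acoustics starting point for such models can be sketched numerically (a toy two-room energy balance with invented parameters, not the paper's calibrated model): each sub-room's reverberant energy decays at its own rate while exchanging energy with its neighbor through the coupling aperture, which produces the characteristic double-slope decay in the deader room.

```python
def coupled_decay(steps=4000, dt=1e-4,
                  d1=60.0, d2=10.0,   # intrinsic decay rates (1/s), hypothetical
                  c12=5.0, c21=5.0):  # coupling rates through the aperture
    """Forward-Euler integration of a two-room statistical energy model."""
    E1, E2 = 1.0, 1.0
    hist = []
    for _ in range(steps):
        dE1 = (-(d1 + c12) * E1 + c21 * E2) * dt
        dE2 = (-(d2 + c21) * E2 + c12 * E1) * dt
        E1, E2 = E1 + dE1, E2 + dE2
        hist.append((E1, E2))
    return hist

# The "dead" room (large d1) decays quickly at first, then its late
# decay is governed by energy fed back from the "live" room (small d2),
# giving the double-slope decay curve characteristic of coupled rooms.
```

    Real predictions require frequency-dependent decay and coupling parameters, which is exactly what the geometrical-acoustics corrections and aperture transmission coefficients in the abstract supply.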

  10. Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction

    NASA Astrophysics Data System (ADS)

    Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo

    2014-12-01

    To achieve a higher level of seismic random noise suppression, the Radon transform has been adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies involved performing TFPF in the full-aperture Radon domain, including linear Radon and parabolic Radon. Although the superiority of this method to the conventional TFPF has been tested through processing on synthetic seismic models and field seismic data, there are still some limitations in the method. Both full-aperture linear Radon and parabolic Radon are applicable and effective for some relatively simple situations (e.g., curved reflection events with regular geometry) but inapplicable for complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the application of the Radon transform must be applied. The filter is better served by adapting the transform to the local character of the data variations. In this article, we propose an idea that adopts the local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated situations of seismic data.

  11. Distributed Learning, Recognition, and Prediction by ART and ARTMAP Neural Networks.

    PubMed

    Carpenter, Gail A.

    1997-11-01

    A class of adaptive resonance theory (ART) models for learning, recognition, and prediction with arbitrarily distributed code representations is introduced. Distributed ART neural networks combine the stable fast learning capabilities of winner-take-all ART systems with the noise tolerance and code compression capabilities of multilayer perceptrons. With a winner-take-all code, the unsupervised model dART reduces to fuzzy ART and the supervised model dARTMAP reduces to fuzzy ARTMAP. With a distributed code, these networks automatically apportion learned changes according to the degree of activation of each coding node, which permits fast as well as slow learning without catastrophic forgetting. Distributed ART models replace the traditional neural network path weight with a dynamic weight equal to the rectified difference between coding node activation and an adaptive threshold. Thresholds increase monotonically during learning according to a principle of atrophy due to disuse. However, monotonic change at the synaptic level manifests itself as bidirectional change at the dynamic level, where the result of adaptation resembles long-term potentiation (LTP) for single-pulse or low frequency test inputs but can resemble long-term depression (LTD) for higher frequency test inputs. This paradoxical behavior is traced to dual computational properties of phasic and tonic coding signal components. A parallel distributed match-reset-search process also helps stabilize memory. Without the match-reset-search system, dART becomes a type of distributed competitive learning network.

  12. Some practical universal noiseless coding techniques, part 3, module PSl14,K+

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.

    1991-01-01

    The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than the larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman Equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
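
    The per-block option selection described here can be sketched with Golomb-Rice codes (a hedged illustration of the adaptive mechanism, not the module's actual option set or bitstream format): for each short block of non-negative integers, the encoder tries several variable-length code parameters and keeps the cheapest, sending the chosen parameter as side information.

```python
def rice_len(n, k):
    """Bit length of n under a Rice code with parameter k:
    unary quotient, a stop bit, then a k-bit remainder."""
    return (n >> k) + 1 + k

def rice_encode(n, k):
    """Rice codeword for n as a bit string."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0" + str(k) + "b") if k else "1" * q + "0"

def encode_block(block, k_options=range(0, 8)):
    """Pick the k that minimizes total block length (side info: best_k)."""
    best_k = min(k_options, key=lambda k: sum(rice_len(n, k) for n in block))
    bits = "".join(rice_encode(n, best_k) for n in block)
    return best_k, bits
```

    A block of small values selects a small k, a block of large values a larger one, so the code tracks the local entropy block by block, which is the adaptivity the module exploits.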

  13. Adaptive and reliably acknowledged FSO communications

    NASA Astrophysics Data System (ADS)

    Fitz, Michael P.; Halford, Thomas R.; Kose, Cenk; Cromwell, Jonathan; Gordon, Steven

    2015-05-01

    Atmospheric turbulence causes the receive signal intensity on free space optical (FSO) communication links to vary over time. Scintillation fades can stymie connectivity for milliseconds at a time. To approach the information-theoretic limits of communication in such time-varying channels, it is necessary either to code across extremely long blocks of data, thereby inducing unacceptable delays, or to vary the code rate according to the instantaneous channel conditions. We describe the design, laboratory testing, and over-the-air testing of an FSO modem that employs a protocol with adaptive coded modulation (ACM) and hybrid automatic repeat request. For links with fixed throughput, this protocol provides a 10 dB reduction in the required received signal-to-noise ratio (SNR); for links with fixed range, this protocol provides a more than 3x increase in throughput. Independent U.S. Government tests demonstrate that our protocol effectively adapts the code rate to match the instantaneous channel conditions. The modem is able to provide throughputs in excess of 850 Mbps on links with ranges greater than 15 kilometers.
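
    The rate-adaptation step of such a protocol can be sketched as a threshold table lookup (hypothetical modes and SNR thresholds for illustration; the modem's actual mode table is not described in the abstract): the transmitter picks the highest-throughput mode whose SNR requirement the current channel estimate clears.

```python
# (mode name, required SNR in dB, throughput in Mbps) -- invented values
MODES = [
    ("QPSK r=1/2",   4.0, 250),
    ("QPSK r=3/4",   6.5, 375),
    ("16QAM r=3/4", 12.0, 750),
    ("16QAM r=7/8", 14.5, 875),
]

def select_mode(snr_db):
    """Highest-throughput mode supported at the estimated SNR,
    or None if the channel is below every threshold (deep fade)."""
    feasible = [m for m in MODES if snr_db >= m[1]]
    return max(feasible, key=lambda m: m[2]) if feasible else None
```

    In a real ACM/HARQ system the SNR estimate is fed back from the receiver and the thresholds are set from measured BER-vs-SNR curves per mode; a fade that drops below all thresholds is bridged by retransmission rather than by a lower mode.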

  14. GIZMO: Multi-method magneto-hydrodynamics+gravity code

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2014-10-01

    GIZMO is a flexible, multi-method magneto-hydrodynamics+gravity code that solves the hydrodynamic equations using a variety of different methods. It introduces new Lagrangian Godunov-type methods that allow solving the fluid equations with a moving particle distribution that is automatically adaptive in resolution and avoids the advection errors, angular momentum conservation errors, and excessive diffusion problems that seriously limit the applicability of “adaptive mesh” (AMR) codes, while simultaneously avoiding the low-order errors inherent to simpler methods like smoothed-particle hydrodynamics (SPH). GIZMO also allows the use of SPH either in “traditional” form or “modern” (more accurate) forms, or use of a mesh. Self-gravity is solved quickly with a BH-Tree (optionally a hybrid PM-Tree for periodic boundaries) and on-the-fly adaptive gravitational softenings. The code is descended from P-GADGET, itself descended from GADGET-2 (ascl:0003.001), and many of the naming conventions remain (for the sake of compatibility with the large library of GADGET work and analysis software).

  15. FUN3D and CFL3D Computations for the First High Lift Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Lee-Rausch, Elizabeth M.; Rumsey, Christopher L.

    2011-01-01

    Two Reynolds-averaged Navier-Stokes codes were used to compute flow over the NASA Trapezoidal Wing at high lift conditions for the 1st AIAA CFD High Lift Prediction Workshop, held in Chicago in June 2010. The unstructured-grid code FUN3D and the structured-grid code CFL3D were applied to several different grid systems. The effects of code, grid system, turbulence model, viscous term treatment, and brackets were studied. The SST model on this configuration predicted lower lift than the Spalart-Allmaras model at high angles of attack; the Spalart-Allmaras model agreed better with experiment. Neglecting viscous cross-derivative terms caused poorer prediction in the wing tip vortex region. Output-based grid adaptation was applied to the unstructured-grid solutions. The adapted grids better resolved wake structures and reduced flap flow separation, which was also observed in uniform grid refinement studies. Limitations of the adaptation method as well as areas for future improvement were identified.

  16. A proposed study of multiple scattering through clouds up to 1 THz

    NASA Technical Reports Server (NTRS)

    Gerace, G. C.; Smith, E. K.

    1992-01-01

    A rigorous computation of the electromagnetic field scattered from an atmospheric liquid water cloud is proposed. The recent development of a fast recursive algorithm (Chew algorithm) for computing the fields scattered from numerous scatterers now makes a rigorous computation feasible. A method is presented for adapting this algorithm to a general case where there are an extremely large number of scatterers. It is also proposed to extend a new binary PAM channel coding technique (El-Khamy coding) to multiple levels with non-square pulse shapes. The Chew algorithm can be used to compute the transfer function of a cloud channel. Then the transfer function can be used to design an optimum El-Khamy code. In principle, these concepts can be applied directly to the realistic case of a time-varying cloud (adaptive channel coding and adaptive equalization). A brief review is included of some preliminary work on cloud dispersive effects on digital communication signals and on cloud liquid water spectra and correlations.

  17. Distributed Adaptive Binary Quantization for Fast Nearest Neighbor Search.

    PubMed

    Xianglong Liu; Zhujin Li; Cheng Deng; Dacheng Tao

    2017-11-01

    Hashing has been proved an attractive technique for fast nearest neighbor search over big data. Compared with the projection based hashing methods, prototype-based ones own stronger power to generate discriminative binary codes for the data with complex intrinsic structure. However, existing prototype-based methods, such as spherical hashing and K-means hashing, still suffer from the ineffective coding that utilizes the complete binary codes in a hypercube. To address this problem, we propose an adaptive binary quantization (ABQ) method that learns a discriminative hash function with prototypes associated with small unique binary codes. Our alternating optimization adaptively discovers the prototype set and the code set of a varying size in an efficient way, which together robustly approximate the data relations. Our method can be naturally generalized to the product space for long hash codes, and enjoys the fast training linear to the number of the training data. We further devise a distributed framework for the large-scale learning, which can significantly speed up the training of ABQ in the distributed environment that has been widely deployed in many areas nowadays. The extensive experiments on four large-scale (up to 80 million) data sets demonstrate that our method significantly outperforms state-of-the-art hashing methods, with up to 58.84% performance gains relatively.
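
    The prototype-based coding idea can be sketched minimally (an illustration in the spirit of ABQ only; the paper's method learns the prototypes and their codes jointly, which this sketch omits): each point is hashed to the binary code of its nearest prototype, and search then compares codes by Hamming distance.

```python
def sq_dist(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def hash_point(x, prototypes, codes):
    """Binary code of the prototype nearest to x."""
    i = min(range(len(prototypes)), key=lambda j: sq_dist(x, prototypes[j]))
    return codes[i]

def hamming(c1, c2):
    """Hamming distance between two integer-packed binary codes."""
    return bin(c1 ^ c2).count("1")

# Hypothetical 2-D prototypes with 2-bit codes, for illustration.
prototypes = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
codes = [0b00, 0b01, 0b10, 0b11]
```

    The point of ABQ is that the prototype set and the (possibly incomplete) code set are learned so that Hamming distance between codes tracks distance between prototypes, rather than fixed up front as here.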

  18. Developing an ethical code for engineers: the discursive approach.

    PubMed

    Lozano, J Félix

    2006-04-01

    From the Hippocratic Oath on, deontological codes and other professional self-regulation mechanisms have been used to legitimize and identify professional groups. New technological challenges and, above all, changes in the socioeconomic environment require adaptable codes which can respond to new demands. We assume that ethical codes for professionals should not simply focus on regulative functions, but must also consider ideological and educative functions. Any adaptations should take into account both contents (values, norms and recommendations) and the drafting process itself. In this article we propose a process for developing a professional ethical code for an official professional association (Colegio Oficial de Ingenieros Industriales de Valencia, COIIV), starting from the philosophical assumptions of discursive ethics but adapting them to critical hermeneutics. Our proposal is based on the Integrity Approach rather than the Compliance Approach. A process aiming to achieve an effective ethical document that fulfils regulative and ideological functions requires a participative, dialogical and reflexive methodology. This process must respond to moral exigencies and demands for efficiency and professional effectiveness. In addition to the methodological proposal we present our experience of producing an ethical code for the industrial engineers' association in Valencia (Spain) where this methodology was applied, and we evaluate the detected problems and future potential.

  19. Phase Contrast Wavefront Sensing for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Bloemhof, E. E.; Wallace, J. K.

    2004-01-01

    Most ground-based adaptive optics systems use one of a small number of wavefront sensor technologies, notably (for relatively high-order systems) the Shack-Hartmann sensor, which provides local measurements of the phase slope (first-derivative) at a number of regularly-spaced points across the telescope pupil. The curvature sensor, with response proportional to the second derivative of the phase, is also sometimes used, but has undesirable noise propagation properties during wavefront reconstruction as the number of actuators becomes large. It is interesting to consider the use for astronomical adaptive optics of the "phase contrast" technique, originally developed for microscopy by Zernike to allow convenient viewing of phase objects. In this technique, the wavefront sensor provides a direct measurement of the local value of phase in each sub-aperture of the pupil. This approach has some obvious disadvantages compared to Shack-Hartmann wavefront sensing, but has some less obvious but substantial advantages as well. Here we evaluate the relative merits in a practical ground-based adaptive optics system.

  20. Gain-adaptive vector quantization for medium-rate speech coding

    NASA Technical Reports Server (NTRS)

    Chen, J.-H.; Gersho, A.

    1985-01-01

    A class of adaptive vector quantizers (VQs) that can dynamically adjust the 'gain' of codevectors according to the input signal level is introduced. The encoder uses a gain estimator to determine a suitable normalization of each input vector prior to VQ coding. The normalized vectors have reduced dynamic range and can then be more efficiently coded. At the receiver, the VQ decoder output is multiplied by the estimated gain. Both forward and backward adaptation are considered and several different gain estimators are compared and evaluated. An approach to optimizing the design of gain estimators is introduced. Some of the more obvious techniques for achieving gain adaptation are substantially less effective than the use of optimized gain estimators. A novel design technique that is needed to generate the appropriate gain-normalized codebook for the vector quantizer is introduced. Experimental results show that a significant gain in segmental SNR can be obtained over nonadaptive VQ with a negligible increase in complexity.
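
    The basic forward-adaptive structure can be sketched as follows (a minimal illustration of gain normalization plus VQ, not the paper's optimized gain estimators or codebook design): estimate a gain for each input vector, code the gain-normalized vector against a codebook of normalized shapes, and rescale by the gain at the decoder.

```python
import math

def rms(v):
    """Simple forward gain estimate; falls back to 1.0 for a zero vector."""
    return math.sqrt(sum(x * x for x in v) / len(v)) or 1.0

def nearest(v, codebook):
    """Index of the codevector closest to v in squared error."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

def encode(v, codebook):
    g = rms(v)                            # gain, sent as side information
    idx = nearest([x / g for x in v], codebook)
    return g, idx

def decode(g, idx, codebook):
    return [g * x for x in codebook[idx]]

# Hypothetical codebook of gain-normalized shape vectors.
codebook = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
```

    Because the codebook only has to cover normalized shapes rather than the full dynamic range, a given codebook size buys more shape resolution; backward adaptation replaces the transmitted gain with one predicted from previously decoded output.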

  1. CosmosDG: An hp -adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anninos, Peter; Lau, Cheuk; Bryant, Colton

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge–Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.

  2. CosmosDG: An hp-adaptive Discontinuous Galerkin Code for Hyper-resolved Relativistic MHD

    NASA Astrophysics Data System (ADS)

    Anninos, Peter; Bryant, Colton; Fragile, P. Chris; Holgado, A. Miguel; Lau, Cheuk; Nemergut, Daniel

    2017-08-01

    We have extended Cosmos++, a multidimensional unstructured adaptive mesh code for solving the covariant Newtonian and general relativistic radiation magnetohydrodynamic (MHD) equations, to accommodate both discrete finite volume and arbitrarily high-order finite element structures. The new finite element implementation, called CosmosDG, is based on a discontinuous Galerkin (DG) formulation, using both entropy-based artificial viscosity and slope limiting procedures for the regularization of shocks. High-order multistage forward Euler and strong-stability preserving Runge-Kutta time integration options complement high-order spatial discretization. We have also added flexibility in the code infrastructure allowing for both adaptive mesh and adaptive basis order refinement to be performed separately or simultaneously in a local (cell-by-cell) manner. We discuss in this report the DG formulation and present tests demonstrating the robustness, accuracy, and convergence of our numerical methods applied to special and general relativistic MHD, although we note that an equivalent capability currently also exists in CosmosDG for Newtonian systems.
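The strong-stability-preserving time integration mentioned above can be illustrated with the classic two-stage SSP Runge-Kutta scheme, written as a convex combination of forward-Euler stages (the test problem here is illustrative, not a CosmosDG case):

```python
import numpy as np

def ssprk2_step(f, u, dt):
    """One step of the two-stage SSP Runge-Kutta scheme: a convex
    combination of forward-Euler stages, which preserves any stability
    property (e.g. positivity) that forward Euler has."""
    u1 = u + dt * f(u)
    return 0.5 * u + 0.5 * (u1 + dt * f(u1))

# Linear decay test problem du/dt = -u with exact solution exp(-t).
f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk2_step(f, u, dt)
print(u, np.exp(-1.0))   # second-order accurate agreement
```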

  3. Exploring the read-write genome: mobile DNA and mammalian adaptation.

    PubMed

    Shapiro, James A

    2017-02-01

    The read-write genome idea predicts that mobile DNA elements will act in evolution to generate adaptive changes in organismal DNA. This prediction was examined in the context of mammalian adaptations involving regulatory non-coding RNAs, viviparous reproduction, early embryonic and stem cell development, the nervous system, and innate immunity. The evidence shows that mobile elements have played specific and sometimes major roles in mammalian adaptive evolution by generating regulatory sites in the DNA and providing interaction motifs in non-coding RNA. Endogenous retroviruses and retrotransposons have been the predominant mobile elements in mammalian adaptive evolution, with the notable exception of bats, where DNA transposons are the major agents of RW genome inscriptions. A few examples of independent but convergent exaptation of mobile DNA elements for similar regulatory rewiring functions are noted.

  4. An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA

    NASA Technical Reports Server (NTRS)

    Djomehri, M. Jahed; Erickson, Larry L.

    1994-01-01

    A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
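The solution-adaptive refinement idea can be caricatured in one dimension: flag cells whose error indicator (here a simple solution jump, an illustrative choice rather than FELISA's actual criterion) exceeds a fraction of the maximum.

```python
import numpy as np

# Minimal solution-adaptive refinement sketch (illustrative criterion):
# flag cells for refinement where a gradient-based indicator is large.
u = np.array([1.0, 1.0, 1.0, 2.0, 4.0, 4.1, 4.1])  # 1-D solution samples
indicator = np.abs(np.diff(u))                      # jump between cells
flags = indicator > 0.5 * indicator.max()           # refine steep cells
print(np.flatnonzero(flags))                        # cells to refine
```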

  5. TH-E-BRE-04: An Online Replanning Algorithm for VMAT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahunbay, E; Li, X; Moreau, M

    2014-06-15

    Purpose: To develop a fast replanning algorithm based on segment aperture morphing (SAM) for online replanning of volumetric modulated arc therapy (VMAT) with flattening filter (FF) and flattening filter free (FFF) beams. Methods: A software tool was developed to interface with a VMAT planning system (Monaco, Elekta), enabling the output of detailed beam/machine parameters of original VMAT plans generated based on planning CTs for FF or FFF beams. A SAM algorithm, previously developed for fixed-beam IMRT, was modified to allow the algorithm to correct for interfractional variations (e.g., setup error, organ motion and deformation) by morphing apertures based on the geometric relationship between the beam's eye view of the anatomy from the planning CT and that from the daily CT for each control point. The algorithm was tested using daily CTs acquired using an in-room CT during daily IGRT for representative prostate cancer cases along with their planning CTs. The algorithm allows for restricted MLC leaf travel distance between control points of the VMAT delivery to prevent SAM from increasing leaf travel, and therefore treatment delivery time. Results: The VMAT plans adapted to the daily CT by SAM were found to improve the dosimetry relative to the IGRT repositioning plans for both FF and FFF beams. For the adaptive plans, the changes in leaf travel distance between control points were < 1 cm for 80% of the control points with no restriction. When restricted to the original plans' maximum travel distance, the dosimetric effect was minimal. The adaptive plans were delivered successfully with similar delivery times as the original plans. The execution time of the SAM algorithm was < 10 seconds. Conclusion: The SAM algorithm can quickly generate deliverable online-adaptive VMAT plans based on the anatomy of the day for both FF and FFF beams.
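The morph-then-restrict idea can be sketched with a hypothetical 1-D leaf model (the real SAM works per control point on beam's-eye-view anatomy; names and numbers here are illustrative):

```python
# Sketch of aperture morphing with restricted leaf travel (hypothetical
# 1-D MLC model): shift each leaf pair by the target displacement seen in
# the daily CT, then cap travel relative to the previous control point.

def morph_aperture(leaves, shift_cm, prev_leaves=None, max_travel_cm=1.0):
    """leaves: list of (left, right) leaf positions in cm for one control point."""
    morphed = []
    for i, (lo, hi) in enumerate(leaves):
        new_lo, new_hi = lo + shift_cm, hi + shift_cm
        if prev_leaves is not None:
            plo, phi = prev_leaves[i]
            # Restrict travel between successive control points so adaptation
            # does not lengthen the delivery time.
            new_lo = min(max(new_lo, plo - max_travel_cm), plo + max_travel_cm)
            new_hi = min(max(new_hi, phi - max_travel_cm), phi + max_travel_cm)
        morphed.append((new_lo, new_hi))
    return morphed

print(morph_aperture([(-2.0, 2.0), (-2.5, 2.5)], shift_cm=0.4))
```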

  6. User's manual for a material transport code on the Octopus Computer Network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naymik, T.G.; Mendez, G.D.

    1978-09-15

    A code to simulate material transport through porous media was developed at Oak Ridge National Laboratory. This code has been modified and adapted for use at Lawrence Livermore Laboratory. This manual, in conjunction with report ORNL-4928, explains the input, output, and execution of the code on the Octopus Computer Network.

  7. X-ray Fluorescence Spectroscopy: the Potential of Astrophysics-developed Techniques

    NASA Astrophysics Data System (ADS)

    Elvis, M.; Allen, B.; Hong, J.; Grindlay, J.; Kraft, R.; Binzel, R. P.; Masterton, R.

    2012-12-01

    X-ray fluorescence from the surface of airless bodies has been studied since the Apollo X-ray fluorescence experiment mapped parts of the lunar surface in 1971-1972. That experiment used a collimated proportional counter with a resolving power of ~1 and a beam size of ~1 degree. Filters separated only the Mg, Al and Si lines. We review progress in X-ray detectors and imaging for astrophysics and show how these advances enable much more powerful use of X-ray fluorescence for the study of airless bodies. Astrophysics X-ray instrumentation has developed enormously since 1972. Low-noise, high-quantum-efficiency X-ray CCDs have flown on ASCA, XMM-Newton, the Chandra X-ray Observatory, Swift and Suzaku, and are the workhorses of X-ray astronomy. They normally span 0.5 to ~8 keV with an energy resolution of ~100 eV. New developments in silicon-based detectors, especially individual pixel addressable devices such as CMOS detectors, can withstand many orders of magnitude more radiation than conventional CCDs before degradation. The capability of high read rates provides dynamic range and temporal resolution. Additionally, the rapid read rates minimize shot noise from thermal dark current and optical light. CMOS detectors can therefore run at warmer temperatures and with ultra-thin optical blocking filters. Thin OBFs mean near unity quantum efficiency below 1 keV, thus maximizing response at the C and O lines. X-ray imaging has advanced similarly far. Two types of imager are now available: specular reflection and coded apertures. X-ray mirrors have been flown on the Einstein Observatory, XMM-Newton, Chandra and others. However, as X-ray reflection only occurs at small (~1 degree) incidence angles, which then requires long focal lengths (meters), mirrors are not usually practical for planetary missions. Moreover, the field of view of X-ray mirrors is comparable to the incidence angle, so they can only image relatively small regions.
More useful are coded-aperture imagers, which have flown on ART-P, Integral, and Swift. The shadow pattern cast by a 50%-open mask allows the distribution of X-rays from a wide (tens of degrees) field of view to be imaged, although uniform emission presents difficulties. A version of a coded-aperture imager plus CCD detector for the study of airless bodies is being built for OSIRIS-REx as the student experiment REXIS. We will show the quality of the spectra that can be expected from this class of instrument.
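The shadow-and-correlate principle can be demonstrated in one dimension. The length-7 m-sequence mask below is an illustrative stand-in for a real coded-aperture pattern, chosen because its balanced autocorrelation is a delta function, so correlation decoding recovers a point source exactly:

```python
import numpy as np

# Toy 1-D coded-aperture camera. Mask: a length-7 m-sequence (about half
# open), whose balanced autocorrelation is a delta function.
mask = np.array([1, 1, 1, 0, 1, 0, 0], dtype=float)
scene = np.zeros(7)
scene[2] = 1.0                       # a single point source

cconv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))
ccorr = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

# Detector records the shadowgram: the scene convolved with the mask.
detector = cconv(scene, mask)

# Balanced correlation decoding: correlate with (2*mask - 1) so the flat
# background from the open fraction cancels, then normalize.
decoded = ccorr(detector, 2 * mask - 1) / mask.sum()
print(np.round(decoded, 6))          # delta at index 2: the source position
```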

  8. Language Recognition via Sparse Coding

    DTIC Science & Technology

    2016-09-08

    a posteriori (MAP) adaptation scheme that further optimizes the discriminative quality of sparse-coded speech features. We empirically validate the...significantly improve the discriminative quality of sparse-coded speech features. In Section 4, we evaluate the proposed approaches against an i-vector

  9. Algorithms for high-speed universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Yeh, Pen-Shu; Miller, Warner

    1993-01-01

    This paper provides the basic algorithmic definitions and performance characterizations for a high-performance adaptive noiseless (lossless) 'coding module' which is currently under separate development as single-chip microelectronic circuits at two NASA centers. Laboratory tests of one of these implementations recently demonstrated coding rates of up to 900 Mbits/s. A companion 'decoding module' can operate at up to half the coder's rate. The functionality provided by these modules should be applicable to most of NASA's science data. The hardware modules incorporate a powerful adaptive noiseless coder for 'standard form' data sources (i.e., sources whose symbols can be represented by uncorrelated nonnegative integers where the smaller integers are more likely than the larger ones). Performance close to the data entropy can be expected over a 'dynamic range' of from 1.5 to 12-15 bits/sample (depending on the implementation). This is accomplished by adaptively choosing the best of many Huffman-equivalent codes to use on each block of 1-16 samples. Because of the extreme simplicity of these codes, no table lookups are actually required in an implementation, thus leading to the very high data rate capabilities already noted.
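The "choose the best of several Huffman-equivalent codes per block" idea is essentially Golomb-Rice coding with a per-block parameter; a minimal sketch (illustrative, not the flight implementation):

```python
def rice_encode(value, k):
    """Golomb-Rice code for a nonnegative integer:
    unary-coded quotient, '0' stop bit, then k-bit binary remainder."""
    q, r = value >> k, value & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b") if k else "1" * q + "0"

def best_k(block, kmax=8):
    """Adaptive choice: pick the k giving the shortest total code for this
    block (the per-block adaptation the abstract describes)."""
    return min(range(kmax + 1),
               key=lambda k: sum(len(rice_encode(v, k)) for v in block))

block = [3, 1, 0, 2, 5, 1, 0, 4]   # small integers more likely than large
k = best_k(block)
bits = "".join(rice_encode(v, k) for v in block)
print(k, len(bits))                # k=1 wins: 22 bits for 8 samples
```

No lookup tables are needed because encoding is pure shifting and masking, which is what makes the very high hardware rates plausible.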

  10. Adaptive compliant structures for flow regulation

    PubMed Central

    Brinkmeyer, Alex; Theunissen, Raf; M. Weaver, Paul; Pirrera, Alberto

    2017-01-01

    This paper introduces conceptual design principles for a novel class of adaptive structures that provide both flow regulation and control. While of general applicability, these design principles, which revolve around the idea of using the instabilities and elastically nonlinear behaviour of post-buckled panels, are exemplified through a case study: the design of a shape-adaptive air inlet. The inlet comprises a deformable post-buckled member that changes shape depending on the pressure field applied by the surrounding fluid, thereby regulating the inlet aperture. By tailoring the stress field in the post-buckled state and the geometry of the initial, stress-free configuration, the deformable section can snap through to close or open the inlet completely. Owing to its inherent ability to change shape in response to external stimuli—i.e. the aerodynamic loads imposed by different operating conditions—the inlet does not have to rely on linkages and mechanisms for actuation, unlike conventional flow-controlling devices. PMID:28878567

  11. Adaptive compliant structures for flow regulation.

    PubMed

    Arena, Gaetano; M J Groh, Rainer; Brinkmeyer, Alex; Theunissen, Raf; M Weaver, Paul; Pirrera, Alberto

    2017-08-01

    This paper introduces conceptual design principles for a novel class of adaptive structures that provide both flow regulation and control. While of general applicability, these design principles, which revolve around the idea of using the instabilities and elastically nonlinear behaviour of post-buckled panels, are exemplified through a case study: the design of a shape-adaptive air inlet. The inlet comprises a deformable post-buckled member that changes shape depending on the pressure field applied by the surrounding fluid, thereby regulating the inlet aperture. By tailoring the stress field in the post-buckled state and the geometry of the initial, stress-free configuration, the deformable section can snap through to close or open the inlet completely. Owing to its inherent ability to change shape in response to external stimuli, i.e. the aerodynamic loads imposed by different operating conditions, the inlet does not have to rely on linkages and mechanisms for actuation, unlike conventional flow-controlling devices.

  12. Non-common path aberration correction in an adaptive optics scanning ophthalmoscope

    PubMed Central

    Sulai, Yusufu N.; Dubra, Alfredo

    2014-01-01

    The correction of non-common path aberrations (NCPAs) between the imaging and wavefront sensing channel in a confocal scanning adaptive optics ophthalmoscope is demonstrated. NCPA correction is achieved by maximizing an image sharpness metric while the confocal detection aperture is temporarily removed, effectively minimizing the monochromatic aberrations in the illumination path of the imaging channel. Comparison of NCPA estimated using zonal and modal orthogonal wavefront corrector bases provided wavefronts that differ by ~λ/20 in root-mean-squared (~λ/30 standard deviation). Sequential insertion of a cylindrical lens in the illumination and light collection paths of the imaging channel was used to compare image resolution after changing the wavefront correction to maximize image sharpness and intensity metrics. Finally, the NCPA correction was incorporated into the closed-loop adaptive optics control by biasing the wavefront sensor signals without reducing its bandwidth. PMID:25401020
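The sharpness-maximization step can be sketched with a one-parameter toy model (the metric and the assumed NCPA value of 0.3 are illustrative; the real system optimizes many wavefront-corrector modes):

```python
import numpy as np

# Toy NCPA correction by image-sharpness maximization: sweep one corrector
# mode coefficient and keep the value that maximizes a sharpness metric.
# The "imaging system" is a stand-in whose sharpness peaks when the applied
# correction cancels an assumed NCPA of 0.3 (arbitrary units).

def sharpness(correction, ncpa=0.3):
    residual = ncpa - correction
    return 1.0 / (1.0 + residual ** 2)   # sharper as residual shrinks

candidates = np.linspace(-1.0, 1.0, 201)
best = candidates[np.argmax([sharpness(c) for c in candidates])]
print(round(best, 2))                    # recovers the assumed NCPA, 0.3
```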

  13. Adaptive thresholding algorithm based on SAR images and wind data to segment oil spills along the northwest coast of the Iberian Peninsula.

    PubMed

    Mera, David; Cotos, José M; Varela-Pet, José; Garcia-Pineda, Oscar

    2012-10-01

    Satellite Synthetic Aperture Radar (SAR) has been established as a useful tool for detecting hydrocarbon spillage on the ocean's surface. Several surveillance applications have been developed based on this technology. Environmental variables such as wind speed should be taken into account for better SAR image segmentation. This paper presents an adaptive thresholding algorithm for detecting oil spills based on SAR data and a wind field estimation as well as its implementation as a part of a functional prototype. The algorithm was adapted to an important shipping route off the Galician coast (northwest Iberian Peninsula) and was developed on the basis of confirmed oil spills. Image testing revealed 99.93% pixel labelling accuracy. By taking advantage of multi-core processor architecture, the prototype was optimized to get a nearly 30% improvement in processing time. Copyright © 2012 Elsevier Ltd. All rights reserved.
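A simplified version of the wind-aware dark-spot segmentation can be sketched as follows; the margin values and the mean-based background are illustrative assumptions, not the paper's trained thresholds:

```python
import numpy as np

# Illustrative adaptive dark-spot segmentation (not the paper's exact rule):
# pixels darker than the background by a wind-dependent margin are flagged.

def segment_dark_spots(img_db, wind_ms):
    """img_db: SAR backscatter in dB; wind_ms: wind speed in m/s (scalar)."""
    background = img_db.mean()
    # Assumed heuristic: calm seas look darker overall, so use a larger
    # margin at low wind to avoid false alarms.
    margin_db = 6.0 if wind_ms < 5.0 else 3.0
    return img_db < background - margin_db

img = np.full((8, 8), -10.0)     # uniform sea clutter, dB
img[2:4, 2:5] = -16.0            # a dark slick-like patch
mask = segment_dark_spots(img, wind_ms=8.0)
print(int(mask.sum()))           # 6 pixels flagged as slick
```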

  14. Regional vertical total electron content (VTEC) modeling together with satellite and receiver differential code biases (DCBs) using semi-parametric multivariate adaptive regression B-splines (SP-BMARS)

    NASA Astrophysics Data System (ADS)

    Durmaz, Murat; Karslioglu, Mahmut Onur

    2015-04-01

    There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines, a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using a local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
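The semi-parametric split (fixed bias plus smooth non-parametric part) can be illustrated with a 1-D toy fit; the hat-function basis, ridge penalty, and bias value below are illustrative assumptions, far simpler than SP-BMARS itself:

```python
import numpy as np

rng = np.random.default_rng(3)

# Semi-parametric sketch: a fixed bias (standing in for a DCB) is estimated
# jointly with a smooth signal modeled by hat (linear B-spline) functions.
# A small ridge penalty on the spline part only makes the split identifiable.
t = np.linspace(0.0, 1.0, 50)
y = 2.5 + np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(t.size)

knots = np.linspace(0.0, 1.0, 9)
hats = np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) /
                  (knots[1] - knots[0]))
A = np.hstack([np.ones((t.size, 1)), hats])   # [bias | spline coefficients]

penalty = np.eye(A.shape[1])
penalty[0, 0] = 0.0                           # do not shrink the bias term
coef = np.linalg.solve(A.T @ A + 1e-3 * penalty, A.T @ y)
print(coef[0])                                # close to the assumed bias 2.5
```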

  15. An Adaptive Ship Detection Scheme for Spaceborne SAR Imagery

    PubMed Central

    Leng, Xiangguang; Ji, Kefeng; Zhou, Shilin; Xing, Xiangwei; Zou, Huanxin

    2016-01-01

    With the rapid development of spaceborne synthetic aperture radar (SAR) and the increasing need of ship detection, research on adaptive ship detection in spaceborne SAR imagery is of great importance. Focusing on practical problems of ship detection, this paper presents a highly adaptive ship detection scheme for spaceborne SAR imagery. It is able to process a wide range of sensors, imaging modes and resolutions. Two main stages are identified in this paper, namely: ship candidate detection and ship discrimination. Firstly, this paper proposes an adaptive land masking method using ship size and pixel size. Secondly, taking into account the imaging mode, incidence angle, and polarization channel of SAR imagery, it implements adaptive ship candidate detection in spaceborne SAR imagery by applying different strategies to different resolution SAR images. Finally, aiming at different types of typical false alarms, this paper proposes a comprehensive ship discrimination method in spaceborne SAR imagery based on confidence level and complexity analysis. Experimental results based on RADARSAT-1, RADARSAT-2, TerraSAR-X, RS-1, and RS-3 images demonstrate that the adaptive scheme proposed in this paper is able to detect ship targets in a fast, efficient and robust way. PMID:27563902
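Adaptive ship-candidate detection in sea clutter is commonly built on cell-averaging CFAR; the 1-D sketch below (guard/training sizes and the scale factor are illustrative, not the paper's parameters) shows the idea of thresholding each cell against its local background:

```python
import numpy as np

# Cell-averaging CFAR sketch, a common adaptive detector for SAR ship
# candidates (parameters here are illustrative, not from the paper).

def ca_cfar_1d(x, guard=2, train=8, scale=4.0):
    """Flag cells exceeding scale * mean of the surrounding training cells."""
    hits = np.zeros(x.size, dtype=bool)
    for i in range(x.size):
        lo, hi = max(0, i - guard - train), min(x.size, i + guard + train + 1)
        ring = np.r_[x[lo:max(0, i - guard)], x[i + guard + 1:hi]]
        if ring.size and x[i] > scale * ring.mean():
            hits[i] = True
    return hits

clutter = np.ones(100)          # flat sea clutter
clutter[40] = 20.0              # bright ship-like return
print(np.flatnonzero(ca_cfar_1d(clutter)))   # only cell 40 is flagged
```

Because the threshold adapts to the local clutter estimate, the false-alarm rate stays roughly constant as sea state varies, which is the property an adaptive scheme for many sensors and resolutions relies on.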

  16. Layer-oriented simulation tool.

    PubMed

    Arcidiacono, Carmelo; Diolaiti, Emiliano; Tordi, Massimiliano; Ragazzoni, Roberto; Farinato, Jacopo; Vernet, Elise; Marchetti, Enrico

    2004-08-01

    The Layer-Oriented Simulation Tool (LOST) is a numerical simulation code developed for analysis of the performance of multiconjugate adaptive optics modules following a layer-oriented approach. The LOST code computes the atmospheric layers in terms of phase screens and then propagates the phase delays introduced in the natural guide stars' wave fronts by using geometrical optics approximations. These wave fronts are combined in an optical or numerical way, including the effects of wave-front sensors on measurements in terms of phase noise. The LOST code is described, and two applications to layer-oriented modules are briefly presented. We focus on the multiconjugate adaptive optics demonstrator to be mounted on the Very Large Telescope and on the Near-IR-Visible Adaptive Interferometer for Astronomy (NIRVANA) interferometric system to be installed at the combined focus of the Large Binocular Telescope.
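The geometrical-optics propagation through phase screens amounts to shifting each screen by altitude times the star's angular offset and summing; a toy 1-D version (screen shapes and offsets are illustrative):

```python
import numpy as np

# Toy layer-oriented geometry (illustrative numbers): the wavefront seen
# toward a guide star is the sum of phase-screen samples pierced along its
# geometric line of sight; high-altitude screens shear with star offset.
n = 16
screens = {0.0: np.sin(np.arange(n)),          # ground layer at 0 km
           10.0: 0.5 * np.cos(np.arange(n))}   # high layer at 10 km

def wavefront(offset_pix_per_km):
    total = np.zeros(n)
    for alt_km, screen in screens.items():
        # Geometric-optics propagation: a pure shift, no diffraction.
        total += np.roll(screen, -int(round(alt_km * offset_pix_per_km)))
    return total

on_axis = wavefront(0.0)
off_axis = wavefront(0.2)   # shears the 10 km layer by 2 pixels
print(on_axis[0])           # sin(0) + 0.5*cos(0) = 0.5
```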

  17. 2ND International Workshop on Adaptive Optics for Industry and Medicine.

    DTIC Science & Technology

    2000-02-08

    The spots are well-separated, and there are only very weak interference peaks between adjacent spots, so identification of the spots is easy and...for transmission through an interference filter, a polarizing filter, the SLM, and a 12 mm diameter aperture to mask the active area in the SLM. A... interfere greatly with the visibility of the primary image. However, as the SLM power increases so does the contrast of the secondary images and

  18. Factors influencing the drain and rinse operation of Banana screens

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Brien, M.; Firth, B.

    An Australian Coal Association Research Project (ACARP) study to identify the variables affecting Banana screen performance is described in this article. The impacts of the following system variables were investigated: panel angle, volumetric feed flow rate, solids content of feed, screen motion, vibration frequency, magnetite content and screen aperture. The article was adapted from a presentation at Coal Prep 2005, Lexington, KY, USA in May 2005. 4 refs., 8 figs., 1 tab.

  19. Adaptive Optics for Turbulent Shear Layers

    DTIC Science & Technology

    2006-12-20

    which is related to the phase variance through the wave number). Once obtained by whatever means, the phase variance was used to compute an...new inlet for a wider test section and included a downstream throat to choke the flow and prevent pressure disturbances from the vacuum pumps from...wavefronts in Figure 15 are for an aperture size that captures only about half a cycle; the streamwise length of the half cycle being ~10 cm (4 in

  20. Slant-path coherent free space optical communications over the maritime and terrestrial atmospheres with the use of adaptive optics for beam wavefront correction.

    PubMed

    Li, Ming; Gao, Wenbo; Cvijetic, Milorad

    2017-01-10

    As a continuation of our previous work [Appl. Opt. 54, 1453 (2015)], in which we studied the performance of coherent free space optical (FSO) communication systems operating over a horizontal path, in this paper we study a coherent FSO system operating over a general slant path. We evaluated the system bit-error rate (BER) when the quadrature phase-shift keying (QPSK) modulation format is applied and an adaptive optics (AO) system is employed to mitigate air turbulence effects, for both maritime and terrestrial transmission scenarios. We adopted a multiple-layer scheme to efficiently model the FSO slant-path links. The atmospheric channel fading was characterized by wavefront phase distortions and log-amplitude fluctuations. We derived analytical expressions characterizing the log-amplitude fluctuations of air turbulence by incorporating aperture averaging within the frame of the multiple-layer model. The results showed that the use of AO improved system performance for both uplinks and downlinks, and revealed that it is more beneficial for FSO downlinks. AO also brought larger BER improvements for maritime slant-path FSO links than for terrestrial ones, with a striking additional gain when AO correction is combined with aperture averaging.
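The qualitative benefit of aperture averaging can be shown with a Monte Carlo sketch: model the channel as lognormal log-amplitude fading and represent aperture averaging simply as a reduced log-amplitude variance (all numbers illustrative, not the paper's link budget):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(4)

# Monte Carlo sketch: QPSK BER over lognormal (log-amplitude) fading;
# aperture averaging is modeled as a reduced log-amplitude variance.

def avg_ber(snr_db, sigma_chi):
    snr0 = 10 ** (snr_db / 10)
    # Log-amplitude chi ~ N(-sigma^2, sigma), so mean intensity is unity.
    chi = rng.normal(-sigma_chi ** 2, sigma_chi, 20000)
    snr = snr0 * np.exp(2 * chi)                        # intensity fading
    return np.mean([0.5 * erfc(sqrt(s)) for s in snr])  # QPSK per-bit BER

ber_no_avg = avg_ber(10.0, sigma_chi=0.5)
ber_with_avg = avg_ber(10.0, sigma_chi=0.1)  # aperture averaging: smaller sigma
print(ber_with_avg < ber_no_avg)             # averaging lowers the mean BER
```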

  1. Different pitcher shapes and trapping syndromes explain resource partitioning in Nepenthes species.

    PubMed

    Gaume, Laurence; Bazile, Vincent; Huguin, Maïlis; Bonhomme, Vincent

    2016-03-01

    Nepenthes pitcher plants display interspecific diversity in pitcher form and diets. This species-rich genus might be a conspicuous candidate for an adaptive radiation. However, the pitcher traits of different species have never been quantified in a comparative study, nor have their possible adaptations to the resources they exploit been tested. In this study, we compare the pitcher features and prey composition of the seven Nepenthes taxa that grow in the heath forest of Brunei (Borneo) and investigate whether these species display different trapping syndromes that target different prey. The Nepenthes species are shown to display species-specific combinations of pitcher shapes, volumes, rewards, attraction and capture traits, and different degrees of ontogenetic pitcher dimorphism. The prey spectra also differ among plant species and between ontogenetic morphotypes in their combinations of ants, flying insects, termites, and noninsect guilds. According to a discriminant analysis, the Nepenthes species collected at the same site differ significantly in prey abundance and composition at the level of order, showing niche segregation but with varying degrees of niche overlap according to pairwise species comparisons. Weakly carnivorous species are first characterized by an absence of attractive traits. Generalist carnivorous species have a sweet odor, a wide pitcher aperture, and an acidic pitcher fluid. Guild specializations are explained by different combinations of morpho-functional traits. Ant captures increase with extrafloral nectar, fluid acidity, and slippery waxy walls. Termite captures increase with narrowness of pitchers, presence of a rim of edible trichomes, and symbiotic association with ants. The abundance of flying insects is primarily correlated with pitcher conicity, pitcher aperture diameter, and odor presence. 
Such species-specific syndromes favoring resource partitioning may result from local character displacement by competition and/or previous adaptations to geographically distinct environments.

  2. Throughput Optimization Via Adaptive MIMO Communications

    DTIC Science & Technology

    2006-05-30

    End-to-end MATLAB packet simulation platform. * Low-density parity-check code (LDPCC). * Field trials with Silvus DSP MIMO testbed. * High mobility...incorporate advanced LDPC (low-density parity-check) codes. Realizing that the power of LDPC codes comes at the price of decoder complexity, we also...Channel Coding: Binary Convolutional Code or LDPC; Packet Length: 0 - 2^16-1 bytes; Coding Rate: 1/2, 2/3, 3/4, 5/6; MIMO Channel Training Length: 0 - 4 symbols

  3. Developing a method for specifying the components of behavior change interventions in practice: the example of smoking cessation.

    PubMed

    Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan

    2013-06-01

    There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
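The interrater-reliability figure reported above is a percent-agreement statistic; a minimal sketch of how such agreement is computed for one double-coded BCT (the ratings below are made-up illustrative data):

```python
# Percent-agreement sketch for double-coded BCT identification: each session
# is coded by two raters as containing (1) or not containing (0) a given BCT.
rater_a = [1, 1, 0, 1, 0, 1, 1, 0]
rater_b = [1, 1, 0, 0, 0, 1, 1, 1]

agreement = sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)
print(f"{agreement:.0%}")   # 6 of 8 sessions agree: 75%
```

In practice a chance-corrected statistic such as Cohen's kappa is often reported alongside raw agreement, since agreement alone can be inflated for rarely used codes.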

  4. Optimum Boundaries of Signal-to-Noise Ratio for Adaptive Code Modulations

    DTIC Science & Technology

    2017-11-14

    1510–1521, Feb. 2015. [2]. Pursley, M. B. and Royster, T. C., “Adaptive-rate nonbinary LDPC coding for frequency-hop communications,” IEEE...and this can cause a very narrowband noise near the center frequency during USRP signal acquisition and generation. This can cause a high BER...

  5. A multiblock/multizone code (PAB 3D-v2) for the three-dimensional Navier-Stokes equations: Preliminary applications

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.

    1990-01-01

    The development and applications of multiblock/multizone and adaptive grid methodologies for solving the three-dimensional simplified Navier-Stokes equations are described. Adaptive grid and multiblock/multizone approaches are introduced and applied to external and internal flow problems. These new implementations increase the capabilities and flexibility of the PAB3D code in solving flow problems associated with complex geometry.

  6. Extra Solar Planet Science With a Non Redundant Mask

    NASA Astrophysics Data System (ADS)

    Minto, Stefenie Nicolet; Sivaramakrishnan, Anand; Greenbaum, Alexandra; St. Laurent, Kathryn; Thatte, Deeparshi

    2017-01-01

    To detect faint planetary companions near a much brighter star at the resolution limit of the James Webb Space Telescope (JWST), the Near-Infrared Imager and Slitless Spectrograph (NIRISS) will use a non-redundant aperture mask (NRM) for high contrast imaging. I simulated NIRISS data of stars with and without planets, and ran these simulations through the code that measures interferometric image properties to determine how sensitive planetary detection is to our knowledge of instrumental parameters, starting with the pixel scale. I measured the position angle, separation, and contrast ratio of the planet (with respect to the star) to characterize the binary pair. To organize these data I am creating programs that will automatically and systematically explore multi-dimensional instrument parameter spaces and binary characteristics. In the future my code will also be applied to explore any other parameters we can simulate.

  7. NASA Tech Briefs, June 2005

    NASA Technical Reports Server (NTRS)

    2005-01-01

    Topics covered include: Apparatus Characterizes Transient Voltages in Real Time; Measuring Humidity in Sealed Glass Encasements; Adaptable System for Vehicle Health and Usage Monitoring; Miniature Focusing Time-of-Flight Mass Spectrometer; Cryogenic High-Sensitivity Magnetometer; Wheel Electrometer System; Carbon-Nanotube Conductive Layers for Thin-Film Solar Cells; Patch Antenna Fed via Unequal-Crossed-Arm Aperture; LC Circuits for Diagnosing Embedded Piezoelectric Devices; Nanowire Thermoelectric Devices; Code for Analyzing and Designing Spacecraft Power System Radiators; Decision Support for Emergency Operations Centers; NASA Records Database; Real-Time Principal-Component Analysis; Fuzzy/Neural Software Estimates Costs of Rocket-Engine Tests; Multicomponent, Rare-Earth-Doped Thermal-Barrier Coatings; Reactive Additives for Phenylethynyl-Containing Resins; Improved Gear Shapes for Face Worm Gear Drives; Alternative Way of Shifting Mass to Move a Spherical Robot; Parylene C as a Sacrificial Material for Microfabrication; In Situ Electrochemical Deposition of Microscopic Wires; Improved Method of Manufacturing SiC Devices; Microwave Treatment of Prostate Cancer and Hyperplasia; Ferroelectric Devices Emit Charged Particles and Radiation; Dusty-Plasma Particle Accelerator; Frozen-Plug Technique for Liquid-Oxygen Plumbing; Shock Waves in a Bose-Einstein Condensate; Progress on a Multichannel, Dual-Mixer Stability Analyzer; Development of Carbon-Nanotube/Polymer Composites; Thermal Imaging of Earth for Accurate Pointing of Deep-Space Antennas; Modifications of a Composite-Material Combustion Chamber; Modeling and Diagnostic Software for Liquefying-Fuel Rockets; and Spacecraft Antenna Clusters for High EIRP.

  8. Chaotic dynamics in accelerator physics

    NASA Astrophysics Data System (ADS)

    Cary, J. R.

    1992-11-01

    Substantial progress was made in several areas of accelerator dynamics. We have completed a design of an FEL wiggler with adiabatic trapping and detrapping sections to develop an understanding of longitudinal adiabatic dynamics and to create efficiency enhancements for recirculating free-electron lasers. We developed a computer code for analyzing the critical KAM tori that bound the dynamic aperture in circular machines. Studies of modes that arise due to the interaction of coasting beams with a narrow-spectrum impedance have begun. During this research, educational and research ties with the accelerator community at large have been strengthened.

  9. Design and Measurements of Dual-Polarized Wideband Constant-Beamwidth Quadruple-Ridged Flared Horn

    NASA Technical Reports Server (NTRS)

    Akgiray, Ahmed; Weinreb, Sander; Imbriale, William

    2011-01-01

    A quad-ridged, flared horn achieving nearly constant beamwidth and excellent return loss over a 6:1 frequency bandwidth is presented. Radiation pattern measurements show excellent beamwidth stability from 2 to 12 GHz. Measured return loss is > 10 dB over the entire band and > 15 dB from 2.5 to 11 GHz. Using a custom physical optics code, the system performance of a radio telescope is computed; the predicted performance is an average aperture efficiency of 70% and an antenna noise temperature of 10 K.

  10. Detecting Faults in Southern California using Computer-Vision Techniques and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Interferometry

    NASA Astrophysics Data System (ADS)

    Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.

    2013-12-01

    Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. 
Following an earthquake, the newly discovered faults can be paired with infrastructure overlays, allowing emergency response teams to identify sites that may have been exposed to damage. The faults will also be incorporated into a database for future integration into fault models and earthquake simulations, improving future earthquake hazard assessment. As new faults are mapped, they will further understanding of the complex fault systems and earthquake hazards within the seismically dynamic state of California.
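
    The edge-detection idea behind this work can be illustrated on a synthetic interferogram: a fault shows up as a line of anomalously steep phase gradients. The sketch below is a minimal stand-in, not the E-DECIDER algorithm; the threshold and the test scene are assumptions.

```python
import numpy as np

def detect_fault_edges(interferogram, threshold=1.0):
    """Flag pixels where the phase gradient is anomalously steep.

    A fault appears in an interferogram as a linear discontinuity,
    i.e. a locus where the phase field jumps sharply.
    """
    gy, gx = np.gradient(interferogram)   # central-difference gradients
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold          # edge-candidate mask

# Synthetic interferogram: smooth ramp with a step discontinuity ("fault")
x = np.linspace(0, 1, 100)
phase = np.tile(x, (100, 1))
phase[:, 50:] += 2.0                      # abrupt slip across a vertical fault
edges = detect_fault_edges(phase, threshold=0.5)
```

    A production pipeline would add phase unwrapping, noise suppression, and line-segment grouping before comparing detections against known fault databases.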

  11. A P-band SAR interference filter

    NASA Technical Reports Server (NTRS)

    Taylor, Victor B.

    1992-01-01

    The synthetic aperture radar (SAR) interference filter is an adaptive filter designed to reduce the effects of interference while minimizing the introduction of undesirable side effects. The author examines the adaptive spectral filter and the improvement in processed SAR imagery using this filter for Jet Propulsion Laboratory Airborne SAR (JPL AIRSAR) data. The quality of these improvements is determined through several data fidelity criteria, such as point-target impulse response, equivalent number of looks, SNR, and polarization signatures. These parameters are used to characterize two data sets, both before and after filtering. The first data set consists of data with the interference present in the original signal, and the second set consists of clean data which has been coherently injected with interference acquired from another scene.
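
    One common form of adaptive spectral filtering, suppressing spectral bins whose power is anomalously high, can be sketched as follows. This is a generic illustration of the idea, not the JPL AIRSAR filter; the median-based threshold rule and the test signal are assumptions.

```python
import numpy as np

def adaptive_notch_filter(signal, k=10.0):
    """Suppress narrowband interference by notching anomalous spectral bins.

    Bins whose power exceeds k times the median spectral power are treated
    as interference and zeroed before the inverse transform.
    """
    spectrum = np.fft.fft(signal)
    power = np.abs(spectrum) ** 2
    spectrum[power > k * np.median(power)] = 0.0
    return np.fft.ifft(spectrum)

# Noise-like wideband "radar" signal plus a strong narrowband interferer
rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
clean = rng.standard_normal(n)
interference = 10.0 * np.cos(2 * np.pi * 128 * t / n)  # FFT bins 128 and 896
filtered = adaptive_notch_filter(clean + interference)
```

    Because the threshold adapts to the median of each block's spectrum, the filter only notches bins that stand well above the wideband SAR signal floor, limiting the side effects on the desired data.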

  12. An adaptive array antenna for mobile satellite communications

    NASA Technical Reports Server (NTRS)

    Milne, Robert

    1988-01-01

    The adaptive array is linearly polarized and consists essentially of a driven lambda/4 monopole surrounded by an array of parasitic elements all mounted on a ground plane of finite size. The parasitic elements are all connected to ground via pin diodes. By applying suitable bias voltages, the desired parasitic elements can be activated and made highly reflective. The directivity and pointing of the antenna beam can be controlled in both the azimuth and elevation planes using high speed digital switching techniques. The antenna RF losses are negligible and the maximum gain is close to the theoretical value determined by the effective aperture size. The antenna is compact, has a low profile, is inexpensive to manufacture, and can handle high transmitter power.

  13. Robust adaptive multichannel SAR processing based on covariance matrix reconstruction

    NASA Astrophysics Data System (ADS)

    Tan, Zhen-ya; He, Feng

    2018-04-01

    With the combination of digital beamforming (DBF) processing, multichannel synthetic aperture radar (SAR) systems in azimuth promise well in high-resolution and wide-swath imaging, whereas conventional processing methods do not take the nonuniformity of the scattering coefficient into consideration. This paper presents a robust adaptive multichannel SAR processing method which first uses the Capon spatial spectrum estimator to obtain the spatial spectrum distribution over all ambiguous directions, and then reconstructs the interference-plus-noise covariance matrix from its definition to obtain the multichannel SAR processing filter. This novel method improves processing performance under nonuniform scattering coefficients and is robust against array errors. Experiments with real measured data demonstrate the effectiveness and robustness of the proposed method.
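
    The two stages described above, Capon spatial spectrum estimation followed by covariance reconstruction over the non-look directions, can be sketched for a uniform linear array. This is a simplified illustration under assumed geometry and signal parameters, not the authors' implementation; the guard-sector width and scene are arbitrary choices.

```python
import numpy as np

def steering(theta, n_elem, d=0.5):
    """Steering vector of an n_elem uniform linear array, spacing d wavelengths."""
    return np.exp(2j * np.pi * d * np.arange(n_elem) * np.sin(theta))

def capon_spectrum(R_inv, thetas, n_elem):
    """Capon spatial spectrum P(theta) = 1 / (a(theta)^H R^-1 a(theta))."""
    return np.array([1.0 / np.real(steering(t, n_elem).conj()
                                   @ R_inv @ steering(t, n_elem)) for t in thetas])

def reconstruct_covariance(R, thetas, look, n_elem, guard=0.1):
    """Rebuild the interference-plus-noise covariance by integrating the
    Capon spectrum over all directions outside a guard sector around the
    look direction."""
    p = capon_spectrum(np.linalg.inv(R), thetas, n_elem)
    R_in = np.zeros((n_elem, n_elem), dtype=complex)
    for t, pt in zip(thetas, p):
        if abs(t - look) > guard:          # exclude the signal sector
            a = steering(t, n_elem)
            R_in += pt * np.outer(a, a.conj())
    return R_in / len(thetas)

# Toy scene: 8-element array, look direction 0 rad, strong interferer at 0.5 rad
n_elem, look = 8, 0.0
rng = np.random.default_rng(1)
amp = 3.0 * rng.standard_normal(200)                 # interferer amplitudes
noise = 0.1 * (rng.standard_normal((n_elem, 200))
               + 1j * rng.standard_normal((n_elem, 200)))
snapshots = steering(0.5, n_elem)[:, None] * amp + noise
R = snapshots @ snapshots.conj().T / 200             # sample covariance
thetas = np.linspace(-np.pi / 2, np.pi / 2, 181)
R_in = reconstruct_covariance(R, thetas, look, n_elem)
# Adaptive filter weights (unnormalized, diagonally loaded for stability)
w = np.linalg.solve(R_in + 1e-6 * np.eye(n_elem), steering(look, n_elem))
```

    Because the reconstruction excludes the look direction, the resulting weights null the ambiguous directions without self-cancelling the desired signal, which is what makes the method robust to nonuniform scattering.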

  14. Adaptive nonlinear L2 and L3 filters for speckled image processing

    NASA Astrophysics Data System (ADS)

    Lukin, Vladimir V.; Melnik, Vladimir P.; Chemerovsky, Victor I.; Astola, Jaakko T.

    1997-04-01

    Here we propose adaptive nonlinear filters based on the calculation and analysis of two or three order statistics in a scanning window. They are designed for processing images corrupted by severe speckle noise with non-symmetrical (Rayleigh or one-sided exponential) distribution laws; impulsive noise can also be present. The proposed filtering algorithms provide a trade-off between efficient speckle noise suppression, robustness, good edge/detail preservation, low computational complexity, and preservation of the average level for homogeneous regions of images. Quantitative evaluations of the characteristics of the proposed filters are presented, as well as the results of their application to real synthetic aperture radar and ultrasound medical images.
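
    The scanning-window order-statistic idea can be sketched with a toy filter that outputs the mean of two inner ranks in each window. The window size and rank choice here are illustrative assumptions, not the authors' L2/L3 designs.

```python
import numpy as np

def order_statistic_filter(image, window=3, ranks=(3, 5)):
    """Slide a window over the image and output the mean of two chosen
    order statistics.

    Averaging two inner ranks rejects impulses (which land at the extreme
    ranks) while smoothing multiplicative speckle.
    """
    pad = window // 2
    padded = np.pad(image, pad, mode='edge')
    out = np.empty(image.shape, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            w = np.sort(padded[i:i + window, j:j + window], axis=None)
            out[i, j] = (w[ranks[0]] + w[ranks[1]]) / 2.0
    return out

# Rayleigh-distributed speckle over a constant region, plus one impulse
rng = np.random.default_rng(2)
img = rng.rayleigh(scale=1.0, size=(32, 32))
img[16, 16] = 100.0                        # impulsive outlier
smoothed = order_statistic_filter(img)
```

    Choosing which ranks to combine is where the adaptation enters: for a skewed speckle distribution, ranks below and above the median can be picked to correct the output bias while keeping impulse rejection.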

  15. Nevada Administrative Code for Special Education Programs.

    ERIC Educational Resources Information Center

    Nevada State Dept. of Education, Carson City. Special Education Branch.

    This document presents excerpts from Chapter 388 of the Nevada Administrative Code, which concerns definitions, eligibility, and programs for students who are disabled or gifted/talented. The first section gathers together 36 relevant definitions from the Code for such concepts as "adaptive behavior," "autism," "gifted and…

  16. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    PubMed

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  17. Critical roles for a genetic code alteration in the evolution of the genus Candida.

    PubMed

    Silva, Raquel M; Paredes, João A; Moura, Gabriela R; Manadas, Bruno; Lima-Costa, Tatiana; Rocha, Rita; Miranda, Isabel; Gomes, Ana C; Koerkamp, Marian J G; Perrot, Michel; Holstege, Frank C P; Boucherie, Hélian; Santos, Manuel A S

    2007-10-31

    During the last 30 years, several alterations to the standard genetic code have been discovered in various bacterial and eukaryotic species. Sense and nonsense codons have been reassigned or reprogrammed to expand the genetic code to selenocysteine and pyrrolysine. These discoveries highlight unexpected flexibility in the genetic code, but do not elucidate how the organisms survived the proteome chaos generated by codon identity redefinition. In order to shed new light on this question, we have reconstructed a Candida genetic code alteration in Saccharomyces cerevisiae and used a combination of DNA microarrays, proteomics and genetics approaches to evaluate its impact on gene expression, adaptation and sexual reproduction. This genetic manipulation blocked mating, locked yeast in a diploid state, remodelled gene expression and created stress cross-protection that generated adaptive advantages under challenging environmental conditions. This study highlights unanticipated roles for codon identity redefinition during the evolution of the genus Candida, and strongly suggests that genetic code alterations create genetic barriers that speed up speciation.

  18. Adaptive Precoded MIMO for LTE Wireless Communication

    NASA Astrophysics Data System (ADS)

    Nabilla, A. F.; Tiong, T. C.

    2015-04-01

    Long-Term Evolution (LTE) and Long-Term Evolution-Advanced (LTE-A) have provided a major step forward in mobile communication capability. The objectives to be achieved are high peak data rates in high spectrum bandwidth and high spectral efficiencies. Technically, pre-coding means that multiple data streams are emitted from the transmit antennas with independent and appropriate weightings such that the link throughput is maximized at the receiver output, thus increasing or equalizing the received signal-to-interference-and-noise ratio (SINR) across the multiple receiver terminals. However, fixed pre-coding is not reliable enough to fully utilize the information transfer rate under changing channel conditions and bandwidth sizes. Thus, adaptive pre-coding is proposed. It applies pre-coding matrix indicator (PMI) channel-state feedback, making it possible to change the pre-coding codebook accordingly, thus achieving data rates higher than fixed pre-coding.
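
    PMI-style adaptive pre-coding amounts to evaluating each codebook entry against the current channel estimate and reporting the index of the best one. The sketch below uses a hypothetical two-entry rank-1 codebook and a toy 2x2 channel, not the actual LTE codebook.

```python
import numpy as np

def select_pmi(H, codebook):
    """Pick the codebook precoder maximizing post-precoding capacity
    for channel H; the returned index plays the role of the PMI."""
    best_idx, best_cap = 0, -np.inf
    for idx, W in enumerate(codebook):
        He = H @ W                          # effective channel with precoder W
        # Shannon capacity of the effective channel (unit noise power)
        cap = np.log2(np.linalg.det(np.eye(He.shape[0])
                                    + He @ He.conj().T).real)
        if cap > best_cap:
            best_idx, best_cap = idx, cap
    return best_idx, best_cap

# Hypothetical rank-1 codebook: in-phase and anti-phase combining
codebook = [np.array([[1.0], [1.0]]) / np.sqrt(2),
            np.array([[1.0], [-1.0]]) / np.sqrt(2)]
# Strongly correlated 2x2 channel: in-phase combining should win
H = np.array([[1.0, 0.9], [0.9, 1.0]])
pmi, cap = select_pmi(H, codebook)
```

    In a live system the receiver runs this search on its channel estimate every reporting interval and feeds the PMI back, so the transmitter can switch codebook entries as the channel changes.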

  19. Biometric iris image acquisition system with wavefront coding technology

    NASA Astrophysics Data System (ADS)

    Hsieh, Sheng-Hsun; Yang, Hsi-Wen; Huang, Shao-Hung; Li, Yung-Hui; Tien, Chung-Hao

    2013-09-01

    Biometric signatures for identity recognition have been practiced for centuries. Basically, the personal attributes used for a biometric identification system can be classified into two areas: one is based on physiological attributes, such as DNA, facial features, retinal vasculature, fingerprint, hand geometry, iris texture and so on; the other depends on individual behavioral attributes, such as signature, keystroke, voice and gait style. Among these features, iris recognition is one of the most attractive approaches due to its nature of randomness, texture stability over a lifetime, high entropy density and non-invasive acquisition. While the performance of iris recognition on high quality images is well investigated, few studies have addressed how iris recognition performs on non-ideal image data, especially when the data are acquired in challenging conditions, such as long working distance, dynamic movement of subjects, uncontrolled illumination conditions and so on. There are three main contributions in this paper. Firstly, the optical system parameters, such as magnification and field of view, were optimally designed through first-order optics. Secondly, the irradiance constraints were derived from the optical conservation theorem. Through the relationship between the subject and the detector, we could estimate the limitation on working distance when the camera lens and CCD sensor were known. The working distance is set to 3m in our system with pupil diameter 86mm and CCD irradiance 0.3mW/cm2. Finally, we employed a hybrid scheme combining eye tracking with a pan-and-tilt system, wavefront coding technology, filter optimization and post signal recognition to implement a robust iris recognition system in dynamic operation. The blurred image was restored to ensure recognition accuracy over the 3m working distance with 400mm focal length and aperture F/6.3 optics. 
    Simulation results as well as experiments validate the proposed wavefront-coded imaging system, where the imaging volume was extended 2.57 times over the traditional optics while keeping sufficient recognition accuracy.

  20. Norm-based coding of facial identity in adults with autism spectrum disorder.

    PubMed

    Walsh, Jennifer A; Maurer, Daphne; Vida, Mark D; Rhodes, Gillian; Jeffery, Linda; Rutherford, M D

    2015-03-01

    It is unclear whether reported deficits in face processing in individuals with autism spectrum disorders (ASD) can be explained by deficits in perceptual face coding mechanisms. In the current study, we examined whether adults with ASD showed evidence of norm-based opponent coding of facial identity, a perceptual process underlying the recognition of facial identity in typical adults. We began with an original face and an averaged face and then created an anti-face that differed from the averaged face in the opposite direction from the original face by a small amount (near adaptor) or a large amount (far adaptor). To test for norm-based coding, we adapted participants on different trials to the near versus far adaptor, then asked them to judge the identity of the averaged face. We varied the size of the test and adapting faces in order to reduce any contribution of low-level adaptation. Consistent with the predictions of norm-based coding, high-functioning adults with ASD (n = 27) and matched typical participants (n = 28) showed identity aftereffects that were larger for the far than near adaptor. Unlike results with children with ASD, the strength of the aftereffects was similar in the two groups. This is the first study to demonstrate norm-based coding of facial identity in adults with ASD. Copyright © 2015 Elsevier Ltd. All rights reserved.

Top